[Numpy-discussion] breaking array indices

2011-06-02 Thread Mathew Yeates
Hi
I have indices into an array that I'd like to split so the pieces are sequential,
e.g.
[1,2,3,10,11] -> [1,2,3],[10,11]

How do I do this?

-Mathew


Re: [Numpy-discussion] breaking array indices

2011-06-02 Thread Mathew Yeates
thanks. My solution was much hackier.

-Mathew

On Thu, Jun 2, 2011 at 10:27 AM, Olivier Delalleau sh...@keba.be wrote:
 I think this does what you want:

 import numpy

 def seq_split(x):
     r = [0] + list(numpy.where(x[1:] != x[:-1] + 1)[0] + 1) + [None]
     return [x[r[i]:r[i + 1]] for i in xrange(len(r) - 1)]
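
 For comparison, numpy's split machinery can do the same thing in one
 line; a minimal sketch, assuming x is a 1-D array of sorted indices:

 import numpy

 def seq_split2(x):
     x = numpy.asarray(x)
     # break wherever the gap between consecutive entries exceeds 1
     return numpy.split(x, numpy.where(numpy.diff(x) != 1)[0] + 1)

 # seq_split2([1, 2, 3, 10, 11]) -> [array([1, 2, 3]), array([10, 11])]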

 -=- Olivier

 2011/6/2 Mathew Yeates mat.yea...@gmail.com

 Hi
 I have indices into an array that I'd like to split so the pieces are sequential,
 e.g.
 [1,2,3,10,11] -> [1,2,3],[10,11]

 How do I do this?

 -Mathew






[Numpy-discussion] Windows Registry Keys

2011-05-19 Thread Mathew Yeates
Hi
I have installed a new version of Python27 in a new directory. I want to get
this info into the registry so, when I install Numpy, it will use my new
Python

TIA
-Mathew


Re: [Numpy-discussion] Windows Registry Keys

2011-05-19 Thread Mathew Yeates
I *am* using the windows installer.

On Thu, May 19, 2011 at 11:14 AM, Alan G Isaac alan.is...@gmail.com wrote:
 On 5/19/2011 2:07 PM, Mathew Yeates wrote:
 I have installed a new version of Python27 in a new directory. I want to get
 this info into the registry so, when I install Numpy, it will use my new
 Python



 It probably will already.
 Did you try?
 (Assumption: you're using Windows installers.)

 Alan Isaac




Re: [Numpy-discussion] Windows Registry Keys

2011-05-19 Thread Mathew Yeates
Right. The Registry keys point to the old Python27.

On Thu, May 19, 2011 at 11:23 AM, Alan G Isaac alan.is...@gmail.com wrote:
 On 5/19/2011 2:15 PM, Mathew Yeates wrote:
 I *am* using the windows installer.

 And you find that it does not find your most recent
 Python 2.7 install, for which you also used the
 Windows installer?

 Alan Isaac




Re: [Numpy-discussion] Windows Registry Keys

2011-05-19 Thread Mathew Yeates
cool. just what I was looking for

On Thu, May 19, 2011 at 2:15 PM, Alan G Isaac alan.is...@gmail.com wrote:
 On 5/19/2011 2:24 PM, Mathew Yeates wrote:
 The Registry keys point to the old Python27.


 Odd.  The default installation settings
 should have reset this.  Or so I believed.
 Maybe this will help?
 http://effbot.org/zone/python-register.htm
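
 In outline, that script just writes the InstallPath key that the
 Windows installers consult; a minimal sketch (assumes Python 2.x on
 Windows, run with the interpreter you want the installers to find):

 import sys
 import _winreg as winreg

 version = sys.version[:3]  # e.g. "2.7"
 key = r"Software\Python\PythonCore\%s\InstallPath" % version
 k = winreg.CreateKey(winreg.HKEY_CURRENT_USER, key)
 winreg.SetValue(k, "", winreg.REG_SZ, sys.prefix)
 winreg.CloseKey(k)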

 Alan Isaac




Re: [Numpy-discussion] f2py on Windows, compiler options

2011-05-19 Thread Mathew Yeates
Solved. Sort of. When I compiled by hand and switched /MD to /MT it
worked. It would still be nice if I could control the compiler options
f2py passes to cl.exe

-Mathew

On Thu, May 19, 2011 at 3:05 PM, Mathew Yeates mat.yea...@gmail.com wrote:
 Hi
 I am trying to run f2py and link to some libraries.
 I get a link error
 LINK : fatal error LNK1104: cannot open file 'LIBC.lib' because (I
 think) the libraries are compiled with /MT (multithreaded).

 I tried adding /NODEFAULTLIB:libc.lib

 but then I have unresolved dependencies. I want to try compiling the
 code with /MT. How can I tell f2py to use this option when compiling
 (not linking; this option is passed to cl)?

 -Mathew



Re: [Numpy-discussion] f2py on Windows, compiler options

2011-05-19 Thread Mathew Yeates
okay. To get it all to work I edited msvc9compiler.py and changed /MD
to /MT. This still led to a different error, having to do with mt.exe,
which does not come with MSVC 2008 Express. I fixed this by commenting
out the /MANIFEST stuff in msvc9compiler.py.
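
A less invasive route is to patch the flag list at build time instead of
editing the distutils source; a sketch, assuming Python 2.x's
distutils.msvc9compiler (which keeps the cl.exe flags in
MSVCCompiler.compile_options):

import distutils.msvc9compiler as msvc

_orig_initialize = msvc.MSVCCompiler.initialize

def _initialize_mt(self, *args, **kwargs):
    _orig_initialize(self, *args, **kwargs)
    # swap the DLL runtime (/MD) for the static runtime (/MT)
    self.compile_options = [opt.replace('/MD', '/MT')
                            for opt in self.compile_options]

msvc.MSVCCompiler.initialize = _initialize_mt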

On Thu, May 19, 2011 at 6:25 PM, Mathew Yeates mat.yea...@gmail.com wrote:
 Solved. Sort of. When I compiled by hand and switched /MD to /MT it
 worked. It would still be nice if I could control the compiler options
 f2py passes to cl.exe

 -Mathew

 On Thu, May 19, 2011 at 3:05 PM, Mathew Yeates mat.yea...@gmail.com wrote:
 Hi
 I am trying to run f2py and link to some libraries.
 I get a link error
 LINK : fatal error LNK1104: cannot open file 'LIBC.lib' because (I
 think) the libraries are compiled with /MT (multithreaded).

 I tried adding /NODEFAULTLIB:libc.lib

 but then I have unresolved dependencies. I want to try compiling the
 code with /MT. How can I tell f2py to use this option when compiling
 (not linking; this option is passed to cl)?

 -Mathew




[Numpy-discussion] reading tiff images

2011-04-26 Thread Mathew Yeates
Hi
What is the current method of using ndimage on a TIFF file? I've seen
different methods using ndimage itself, scipy.misc and PIL.

Mathew


Re: [Numpy-discussion] reading tiff images

2011-04-26 Thread Mathew Yeates
is scikits.image.io documented anywhere?

On Tue, Apr 26, 2011 at 11:45 AM, Zachary Pincus
zachary.pin...@yale.edu wrote:

 On Apr 26, 2011, at 2:31 PM, Daniel Lepage wrote:

 You need PIL no matter what; scipy.misc.imread, scipy.ndimage.imread,
 and scikits.image.io.imread all call PIL.

 scikits.image.io also has a ctypes wrapper for the freeimage library.
 I prefer these (well, I wrote them), though apparently there are some
 64-bit issues (crashes?). I haven't been working on a 64-bit system so
 I haven't been able to address them, but I will be soon. It's a very
 thin wrapper around a simple image IO library, so there's lots of room
 to add and extend as need be...

 All of the PIL wrappers are kluges around serious flaws in how PIL
 reads images, particularly non-8-bit images and in particular non-
 native-endian 16-bit images.

 Zach


 Theoretically there's no difference between any of them, although in
 actuality some use "import Image" and others use "from PIL import
 Image"; one of these may fail depending on how you installed PIL. (I'm
 not sure which is supposed to be standard - the PIL docs use both
 interchangeably, and I think the latest version of PIL on pypi sets it
 up so that both will work).

 I'd use whichever tool you're already importing - if you're using
 ndimage anyway, just use ndimage.imread rather than adding more
 imports.

 Note that using PIL directly is easy, but does require adding an extra
 step; OTOH, if you're familiar with PIL, you can use some of its
 transformations from the start, e.g.

 import numpy as np
 from PIL import Image

 def imread(fname, mode='RGBA'):
     return np.asarray(Image.open(fname).convert(mode))

 to ensure that you always get 4-channel images, even for images that
 were initially RGB or grayscale.

 HTH,
 Dan

 On Tue, Apr 26, 2011 at 2:00 PM, Mathew Yeates
 mat.yea...@gmail.com wrote:
 Hi
 What is the current method of using ndimage on a TIFF file? I've seen
 different methods using ndimage itself, scipy.misc and PIL.

 Mathew





[Numpy-discussion] f2py pass by reference

2011-04-12 Thread Mathew Yeates
I have
subroutine foo (a)
  integer a
  print*, "Hello from Fortran!"
  print*, "a=", a
  a=2
  end

and from python I want to do
>>> a=1
>>> foo(a)

and I want a's value to now be 2.
How do I do this?

Mathew


Re: [Numpy-discussion] f2py pass by reference

2011-04-12 Thread Mathew Yeates
bizarre
I get
=
>>> hello.foo(a)
 Hello from Fortran!
 a= 1
2
>>> a
1
>>> hello.foo(a)
 Hello from Fortran!
 a= 1
2
>>> print a
1

=

i.e. The value of 2 gets printed! This is numpy 1.3.0

-Mathew


On Tue, Apr 12, 2011 at 11:45 AM, Pearu Peterson
pearu.peter...@gmail.com wrote:


 On Tue, Apr 12, 2011 at 9:06 PM, Mathew Yeates mat.yea...@gmail.com wrote:

 I have
 subroutine foo (a)
      integer a
      print*, "Hello from Fortran!"
      print*, "a=", a
      a=2
      end

 and from python I want to do
  >>> a=1
  >>> foo(a)

 and I want a's value to now be 2.
 How do I do this?

 With

  subroutine foo (a)
      integer a
 !f2py intent(in, out) a
      print*, "Hello from Fortran!"
      print*, "a=", a
      a=2
      end

 you will have the desired effect:

 >>> a = 1
 >>> a = foo(a)
 >>> print a
 2

 HTH,
 Pearu
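
 If the goal is to modify the argument truly in place rather than return
 a new value, the usual alternative is a length-1 array with
 intent(inout); a sketch (foo_inplace is a hypothetical name, hello the
 wrapped module):

 ! Fortran side
       subroutine foo_inplace (a)
       integer a(1)
 !f2py intent(inout) a
       a(1) = 2
       end

 # Python side
 >>> import numpy as np
 >>> a = np.array([1], dtype=np.int32)
 >>> hello.foo_inplace(a)
 >>> a
 array([2])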





[Numpy-discussion] constructing an array from memory

2010-09-24 Thread Mathew Yeates
I'm trying to do something ... unusual.

gdb supports scripting with Python. From within my Python script, I can
get the address of a contiguous area of memory that stores a Fortran
array. I want to create a NumPy array using frombuffer. I see that
the CPython API supports the creation of a buffer, but is there an
easier, more direct way?

-Mathew


Re: [Numpy-discussion] constructing an array from memory

2010-09-24 Thread Mathew Yeates
Thanks a lot. I was wading through the Python C API. This is much simpler.

-Mathew

On Fri, Sep 24, 2010 at 10:21 AM, Zachary Pincus
zachary.pin...@yale.edu wrote:
 I'm trying to do something ... unusual.

 gdb supports scripting with Python. From within my Python script, I can
 get the address of a contiguous area of memory that stores a Fortran
 array. I want to create a NumPy array using frombuffer. I see that
 the CPython API supports the creation of a buffer, but is there an
 easier, more direct way?

 Here's how I do a similar task:
 numpy.ndarray(shape, dtype=dtype,
               buffer=(ctypes.c_char*size_in_bytes).from_address(address))

 You may need the strides or order parameters, as well.

 Perhaps there's an easier way to create a buffer from an integer
 memory address, but this seems pretty straightforward.
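
 Filled in with concrete numbers (all hypothetical, for illustration),
 that recipe looks like:

 import ctypes
 import numpy as np

 address = 0x7f0000000000                  # from gdb, assumed
 shape = (100, 50)                         # likewise assumed
 nbytes = 100 * 50 * np.dtype(np.float64).itemsize

 buf = (ctypes.c_char * nbytes).from_address(address)
 # order='F' because the memory holds a Fortran (column-major) array
 arr = np.ndarray(shape, dtype=np.float64, buffer=buf, order='F')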

 Zach



Re: [Numpy-discussion] 2D binning

2010-06-02 Thread Mathew Yeates
thanks. I am also getting an error in ndi.mean
Were you getting the error
RuntimeError: data type not supported?

-Mathew


On Wed, Jun 2, 2010 at 9:40 AM, Wes McKinney wesmck...@gmail.com wrote:

 On Wed, Jun 2, 2010 at 3:41 AM, Vincent Schut sc...@sarvision.nl wrote:
  On 06/02/2010 04:52 AM, josef.p...@gmail.com wrote:
  On Tue, Jun 1, 2010 at 9:57 PM, Zachary Pincus zachary.pin...@yale.edu
  wrote:
  I guess it's as fast as I'm going to get. I don't really see any
  other way. BTW, the lat/lons are integers)
 
  You could (in c or cython) try a brain-dead hashtable with no
  collision detection:
 
  for lat, long, data in dataset:
 bin = (lat ^ long) % num_bins
 hashtable[bin] = update_incremental_mean(hashtable[bin], data)
 
  you'll of course want to do some experiments to see if your data are
  sufficiently sparse and/or you can afford a large enough hashtable
  array that you won't get spurious hash collisions. Adding error-
  checking to ensure that there are no collisions would be pretty
  trivial (just keep a table of the lat/long for each hash value, which
  you'll need anyway, and check that different lat/long pairs don't get
  assigned the same bin).
 
  Zach
 
 
 
  -Mathew
 
  On Tue, Jun 1, 2010 at 1:49 PM, Zachary Pincus
 zachary.pin...@yale.edu
  wrote:
  Hi
  Can anyone think of a clever (non-looping) solution to the
  following?
 
  I have a list of latitudes, a list of longitudes, and a list of data
  values. All lists are the same length.
 
  I want to compute an average of data values for each lat/lon pair.
  e.g. if lat[1001],lon[1001] == lat[2001],lon[2001] then
  data[1001] = (data[1001] + data[2001])/2
 
  Looping is going to take waaay too long.
 
  As a start, are the equal lat/lon pairs exactly equal (i.e. either
  not floating-point, or floats that will always compare equal, that is,
  the floating-point bit-patterns will be guaranteed to be identical) or
  approximately equal to float tolerance?
 
  If you're in the approx-equal case, then look at the KD-tree in scipy
  for doing near-neighbors queries.
 
  If you're in the exact-equal case, you could consider hashing the lat/
  lon pairs or something. At least then the looping is O(N) and not
  O(N^2):
 
  import collections
  grouped = collections.defaultdict(list)
  for lt, ln, da in zip(lat, lon, data):
      grouped[(lt, ln)].append(da)
 
  averaged = dict((ltln, numpy.mean(da)) for ltln, da in
  grouped.items())
 
  Is that fast enough?
 
  If the lat lon can be converted to a 1d label as Wes suggested, then
  in a similar timing exercise ndimage was the fastest.
  http://mail.scipy.org/pipermail/scipy-user/2009-February/019850.html
 
  And as you said your lats and lons are integers, you could simply do
 
  ll = lat*1000 + lon
 
  to get unique 'hashes' or '1d labels' for you latlon pairs, as a lat or
  lon will never exceed 360 (degrees).
 
  After that, either use the ndimage approach, or you could use
  histogramming with weighting by data values and divide by histogram
  withouth weighting, or just loop.
 
  Vincent
 
 
  (this was for python 2.4, also later I found np.bincount which
  requires that the labels are consecutive integers, but is as fast as
  ndimage)
 
  I don't know how it would compare to the new suggestions.
 
  Josef
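
  A sketch of that bincount approach, assuming integer lat/lon in
  [0, 360) and the same lat, lon, data arrays as in Wes's example:

  keys = lat * 1000 + lon                 # unique 1d label per pair
  sums = np.bincount(keys, weights=data)  # per-label sum of data
  counts = np.bincount(keys)              # per-label number of points
  means = sums[counts > 0] / counts[counts > 0]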
 
 
 
 
  Zach
 
 
 
 
 

 I was curious about how fast ndimage was for this operation so here's
 the complete function.

 import numpy as np
 import scipy.ndimage as ndi

 N = 1

 lat = np.random.randint(0, 360, N)
 lon = np.random.randint(0, 360, N)
 data = np.random.randn(N)

 def group_mean(lat, lon, data):
     indexer = np.lexsort((lon, lat))
     lat = lat.take(indexer)
     lon = lon.take(indexer)
     sorted_data = data.take(indexer)

     keys = 1000 * lat + lon
     unique_keys = np.unique(keys)

     result = ndi.mean(sorted_data, labels=keys, index=unique_keys)
     decoder = keys.searchsorted(unique_keys)

     return dict(zip(zip(lat.take(decoder), lon.take(decoder)), result))

 Appears to be about 13x faster (and could be made faster still) than
 the naive version on my machine:

 import collections

 def group_mean_naive(lat, lon, data):
     grouped = collections.defaultdict(list)
     for lt, ln, da in zip(lat, lon, data):
         grouped[(lt, ln)].append(da)

     averaged = dict((ltln, np.mean(da)) for ltln, da in grouped.items())
     return averaged


Re: [Numpy-discussion] 2D binning

2010-06-02 Thread Mathew Yeates
I'm on Windows, using a precompiled binary. I never built numpy/scipy on
Windows.

On Wed, Jun 2, 2010 at 10:45 AM, Wes McKinney wesmck...@gmail.com wrote:

 On Wed, Jun 2, 2010 at 1:23 PM, Mathew Yeates mat.yea...@gmail.com
 wrote:
  thanks. I am also getting an error in ndi.mean
  Were you getting the error
  RuntimeError: data type not supported?
 
  -Mathew
 
  On Wed, Jun 2, 2010 at 9:40 AM, Wes McKinney wesmck...@gmail.com
 wrote:
 
  On Wed, Jun 2, 2010 at 3:41 AM, Vincent Schut sc...@sarvision.nl
 wrote:
   On 06/02/2010 04:52 AM, josef.p...@gmail.com wrote:
   On Tue, Jun 1, 2010 at 9:57 PM, Zachary Pincus
 zachary.pin...@yale.edu
wrote:
   I guess it's as fast as I'm going to get. I don't really see any
   other way. BTW, the lat/lons are integers)
  
   You could (in c or cython) try a brain-dead hashtable with no
   collision detection:
  
   for lat, long, data in dataset:
  bin = (lat ^ long) % num_bins
  hashtable[bin] = update_incremental_mean(hashtable[bin], data)
  
   you'll of course want to do some experiments to see if your data are
   sufficiently sparse and/or you can afford a large enough hashtable
   array that you won't get spurious hash collisions. Adding error-
   checking to ensure that there are no collisions would be pretty
   trivial (just keep a table of the lat/long for each hash value,
 which
   you'll need anyway, and check that different lat/long pairs don't
 get
   assigned the same bin).
  
   Zach
  
  
  
   -Mathew
  
   On Tue, Jun 1, 2010 at 1:49 PM, Zachary
    Pincus zachary.pin...@yale.edu
   wrote:
   Hi
   Can anyone think of a clever (non-looping) solution to the
   following?
  
   I have a list of latitudes, a list of longitudes, and a list of data
   values. All lists are the same length.
  
   I want to compute an average of data values for each lat/lon pair.
   e.g. if lat[1001],lon[1001] == lat[2001],lon[2001] then
   data[1001] = (data[1001] + data[2001])/2
  
   Looping is going to take waaay too long.
  
   As a start, are the equal lat/lon pairs exactly equal (i.e.
 either
   not floating-point, or floats that will always compare equal, that
   is,
   the floating-point bit-patterns will be guaranteed to be identical)
   or
   approximately equal to float tolerance?
  
   If you're in the approx-equal case, then look at the KD-tree in
 scipy
   for doing near-neighbors queries.
  
   If you're in the exact-equal case, you could consider hashing the
   lat/
   lon pairs or something. At least then the looping is O(N) and not
   O(N^2):
  
   import collections
   grouped = collections.defaultdict(list)
   for lt, ln, da in zip(lat, lon, data):
  grouped[(lt, ln)].append(da)
  
   averaged = dict((ltln, numpy.mean(da)) for ltln, da in
   grouped.items())
  
   Is that fast enough?
  
   If the lat lon can be converted to a 1d label as Wes suggested, then
   in a similar timing exercise ndimage was the fastest.
   http://mail.scipy.org/pipermail/scipy-user/2009-February/019850.html
  
   And as you said your lats and lons are integers, you could simply do
  
   ll = lat*1000 + lon
  
   to get unique 'hashes' or '1d labels' for you latlon pairs, as a lat
 or
   lon will never exceed 360 (degrees).
  
   After that, either use the ndimage approach, or you could use
   histogramming with weighting by data values and divide by histogram
   withouth weighting, or just loop.
  
   Vincent
  
  
   (this was for python 2.4, also later I found np.bincount which
   requires that the labels are consecutive integers, but is as fast as
   ndimage)
  
   I don't know how it would compare to the new suggestions.
  
   Josef
  
  
  
  
   Zach
  
  
  
  
  
 
  I was curious about how fast ndimage was for this operation so here's
  the complete function.
 
  import scipy.ndimage as ndi
 
  N = 1
 
  lat = np.random.randint(0, 360, N)
  lon = np.random.randint(0, 360, N)
  data = np.random.randn(N)
 
   def group_mean(lat, lon, data):
       indexer = np.lexsort((lon, lat))
       lat = lat.take(indexer)
       lon = lon.take(indexer)
       sorted_data = data.take(indexer)
  
       keys = 1000 * lat + lon
       unique_keys = np.unique(keys)
  
       result = ndi.mean(sorted_data, labels=keys, index=unique_keys)
       decoder = keys.searchsorted(unique_keys)
  
       return dict

Re: [Numpy-discussion] 2D binning

2010-06-02 Thread Mathew Yeates
Nope. This version didn't work either.



 If you're on Python 2.6 the binary on here might work for you:

 http://www.lfd.uci.edu/~gohlke/pythonlibs/

 It looks recent enough to have the rewritten ndimage



[Numpy-discussion] 2D binning

2010-06-01 Thread Mathew Yeates
Hi
Can anyone think of a clever (non-looping) solution to the following?

I have a list of latitudes, a list of longitudes, and a list of data values.
All lists are the same length.

I want to compute an average of data values for each lat/lon pair. e.g. if
lat[1001],lon[1001] == lat[2001],lon[2001] then
data[1001] = (data[1001] + data[2001])/2

Looping is going to take waaay too long.

Mathew


Re: [Numpy-discussion] 2D binning

2010-06-01 Thread Mathew Yeates
I guess it's as fast as I'm going to get. I don't really see any other way.
BTW, the lat/lons are integers)

-Mathew

On Tue, Jun 1, 2010 at 1:49 PM, Zachary Pincus zachary.pin...@yale.edu wrote:

  Hi
  Can anyone think of a clever (non-looping) solution to the following?
 
  I have a list of latitudes, a list of longitudes, and a list of data
  values. All lists are the same length.
 
  I want to compute an average of data values for each lat/lon pair.
  e.g. if lat[1001],lon[1001] == lat[2001],lon[2001] then
  data[1001] = (data[1001] + data[2001])/2
 
  Looping is going to take waaay too long.

 As a start, are the equal lat/lon pairs exactly equal (i.e. either
 not floating-point, or floats that will always compare equal, that is,
 the floating-point bit-patterns will be guaranteed to be identical) or
 approximately equal to float tolerance?

 If you're in the approx-equal case, then look at the KD-tree in scipy
 for doing near-neighbors queries.

 If you're in the exact-equal case, you could consider hashing the lat/
 lon pairs or something. At least then the looping is O(N) and not
 O(N^2):

 import collections
 grouped = collections.defaultdict(list)
 for lt, ln, da in zip(lat, lon, data):
   grouped[(lt, ln)].append(da)

 averaged = dict((ltln, numpy.mean(da)) for ltln, da in grouped.items())

 Is that fast enough?

 Zach



[Numpy-discussion] matplotlib is breaking numpy

2009-11-19 Thread Mathew Yeates
There is definitely something wrong with matplotlib/numpy. Consider the
following
from numpy import *
mydata=memmap('map.dat',dtype=float64,mode='w+',shape=56566500)
 del mydata

I can now remove the file map.dat with (from the command line) $rm map.dat

However
If I plot  mydata before the line
 del mydata


I can't get rid of the file until I exit python!!
Does matplotlib keep a reference to the data? How can I remove this
reference?

Mathew


Re: [Numpy-discussion] matplotlib is breaking numpy

2009-11-19 Thread Mathew Yeates
Yeah, I tried that.

Here's what I'm doing. I have an application which displays different
datasets which a user selects from a drop-down list. I want to overwrite the
existing plot with a new one. I've tried deleting just about everything to get
matplotlib to let go of my data!

Mathew

On Thu, Nov 19, 2009 at 10:30 AM, John Hunter jdh2...@gmail.com wrote:





 On Nov 19, 2009, at 11:57 AM, Robert Kern robert.k...@gmail.com wrote:

  On Thu, Nov 19, 2009 at 11:52, Mathew Yeates mat.yea...@gmail.com
  wrote:
  There is definitely something wrong with matplotlib/numpy. Consider
  the
  following
  from numpy import *
  mydata=memmap('map.dat',dtype=float64,mode='w+',shape=56566500)
  del mydata
 
  I can now remove the file map.dat with (from the command line) $rm
  map.dat
 
  However
  If I plot  mydata before the line
  del mydata
 
 
  I can't get rid of the file until I exit python!!
  Does matplotlib keep a reference to the data?
 
  Almost certainly.
 
  How can I remove this
  reference?
 
  Probably by deleting the plot objects that were created and close all
  matplotlib windows referencing the data. If you are using IPython, you
  should know that many of the returned objects are kept in Out, so you
  will need to clear that. There might be some more places internal to
  matplotlib, I don't know.
 

 Closing the figure window containing the data *should* be enough. In
 pylab/pyplot, this also triggers a call to gc.collect.




  With some care, you can use gc.get_referrers() to find the objects
  that are holding direct references to your memmap.
 
  --
  Robert Kern
 
  I have come to believe that the whole world is an enigma, a harmless
  enigma that is made terrible by our own mad attempt to interpret it as
  though it had an underlying truth.
   -- Umberto Eco



Re: [Numpy-discussion] matplotlib is breaking numpy

2009-11-19 Thread Mathew Yeates
I am running my gtk app from python. I am deleting the canvas and running
gc.collect(). I still seem to have a reference to my memmapped data.

Any other hints?

-Mathew

On Thu, Nov 19, 2009 at 10:42 AM, John Hunter jdh2...@gmail.com wrote:





 On Nov 19, 2009, at 12:35 PM, Mathew Yeates mat.yea...@gmail.com wrote:

 Yeah, I tried that.

 Here's what I'm doing. I have an application which displays different
 datasets which a user selects from a drop-down list. I want to overwrite the
 existing plot with a new one. I've tried deleting just about everything to get
 matplotlib to let go of my data!



 What is everything?  Are you using pyplot or are you embedding mpl in a
 GUI?  If the latter, are you deleting the FigureCanvas?  You will also need
 to call gc.collect after deleting the mpl objects because we use a lot of
 circular references. Pyplot close does this automatically, but this does not
 apply to embedding.

 How are you running your app?  From the shell or IPython?




 Mathew

 On Thu, Nov 19, 2009 at 10:30 AM, John Hunter jdh2...@gmail.com wrote:





 On Nov 19, 2009, at 11:57 AM, Robert Kern robert.k...@gmail.com wrote:

  On Thu, Nov 19, 2009 at 11:52, Mathew Yeates mat.yea...@gmail.com
  wrote:
  There is definitely something wrong with matplotlib/numpy. Consider
  the
  following
  from numpy import *
  mydata=memmap('map.dat',dtype=float64,mode='w+',shape=56566500)
  del mydata
 
  I can now remove the file map.dat with (from the command line) $rm
  map.dat
 
  However
  If I plot  mydata before the line
  del mydata
 
 
  I can't get rid of the file until I exit python!!
  Does matplotlib keep a reference to the data?
 
  Almost certainly.
 
  How can I remove this
  reference?
 
  Probably by deleting the plot objects that were created and close all
  matplotlib windows referencing the data. If you are using IPython, you
  should know that many of the returned objects are kept in Out, so you
  will need to clear that. There might be some more places internal to
  matplotlib, I don't know.
 

 Closing the figure window containing the data *should* be enough. In
 pylab/pyplot, this also triggers a call to gc.collect.




  With some care, you can use gc.get_referrers() to find the objects
  that are holding direct references to your memmap.
 
  --
  Robert Kern
 
  I have come to believe that the whole world is an enigma, a harmless
  enigma that is made terrible by our own mad attempt to interpret it as
  though it had an underlying truth.
   -- Umberto Eco








Re: [Numpy-discussion] matplotlib is breaking numpy

2009-11-19 Thread Mathew Yeates
yes, a GTK app from the python shell. And not using the toolbar.
I'll see if I can extract out a sample of code that demonstrates the problem
I'm having.
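
In outline, the teardown I am trying looks like this (a sketch; canvas is
the embedded FigureCanvas, ax its axes, and mydata the memmap):

import gc

ax.cla()            # drop the Line2D objects that reference mydata
canvas.destroy()    # tear down the embedded GTK canvas
del mydata
gc.collect()        # break matplotlib's circular references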

Thx
Mathew

On Thu, Nov 19, 2009 at 10:56 AM, John Hunter jdh2...@gmail.com wrote:





 On Nov 19, 2009, at 12:53 PM, Mathew Yeates mat.yea...@gmail.com wrote:

 I am running my gtk app from python. I am deleting the canvas and running
 gc.collect(). I still seem to have a reference to my memmapped data.

 Any other hints?


 Gtk app from the standard python shell?

 Are you using the mpl toolbar?  It keeps a ref to the canvas. If you can
 create a small freestanding example, that would help




 -Mathew

 On Thu, Nov 19, 2009 at 10:42 AM, John Hunter jdh2...@gmail.com wrote:





 On Nov 19, 2009, at 12:35 PM, Mathew Yeates mat.yea...@gmail.com wrote:

 Yeah, I tried that.

 Here's what I'm doing. I have an application which displays different
 datasets which a user selects from a drop-down list. I want to overwrite the
 existing plot with a new one. I've tried deleting just about everything to get
 matplotlib to let go of my data!



 What is everything?  Are you using pyplot or are you embedding mpl in a
 GUI?  If the latter, are you deleting the FigureCanvas?  You will also need
 to call gc.collect after deleting the mpl objects because we use a lot of
 circular references. Pyplot close does this automatically, but this does not
 apply to embedding.

 How are you running your app?  From the shell or IPython?




 Mathew

 On Thu, Nov 19, 2009 at 10:30 AM, John Hunter jdh2...@gmail.com wrote:





 On Nov 19, 2009, at 11:57 AM, Robert Kern robert.k...@gmail.com wrote:

  On Thu, Nov 19, 2009 at 11:52, Mathew Yeates mat.yea...@gmail.com
  wrote:
  There is definitely something wrong with matplotlib/numpy. Consider
  the
  following
  from numpy import *
  mydata=memmap('map.dat',dtype=float64,mode='w+',shape=56566500)
  del mydata
 
  I can now remove the file map.dat with (from the command line) $rm
  map.dat
 
  However
  If I plot  mydata before the line
  del mydata
 
 
  I can't get rid of the file until I exit python!!
  Does matplotlib keep a reference to the data?
 
  Almost certainly.
 
  How can I remove this
  reference?
 
  Probably by deleting the plot objects that were created and close all
  matplotlib windows referencing the data. If you are using IPython, you
  should know that many of the returned objects are kept in Out, so you
  will need to clear that. There might be some more places internal to
  matplotlib, I don't know.
 

 Closing the figure window containing the data *should* be enough. In
 pylab/pyplot, this also triggers a call to gc.collect.




  With some care, you can use gc.get_referrers() to find the objects
  that are holding direct references to your memmap.
 
  --
  Robert Kern
 
  I have come to believe that the whole world is an enigma, a harmless
  enigma that is made terrible by our own mad attempt to interpret it as
  though it had an underlying truth.
   -- Umberto Eco












[Numpy-discussion] matplotlib and numpy cause MemoryError

2009-11-18 Thread Mathew Yeates
Hi

I have a line of matplotlib code

-self.ax.plot(plot_data,mif)



that causes the line

-self.data=numpy.zeros(shape=dims)



to throw a MemoryError exception.

(if I comment out the first line I get no error.)


This is on a windows xp machine with latest numpy and the latest matplotlib.



I have a feeling this may be a nightmare to figure out what matplotlib
and/or numpy are doing wrong. Any ideas where I can start?



Mathew


Re: [Numpy-discussion] matplotlib and numpy cause MemoryError

2009-11-18 Thread Mathew Yeates
The value of dims is constant and not particularly large. I also checked to
make sure I wasn't running out of memory. Are there other reasons for this
error?

Mathew

On Wed, Nov 18, 2009 at 1:51 PM, Robert Kern robert.k...@gmail.com wrote:

 On Wed, Nov 18, 2009 at 15:48, Mathew Yeates mat.yea...@gmail.com wrote:
  Hi
 
  I have a line of matplotlib code
 
  -self.ax.plot(plot_data,mif)
 
 
 
  that causes the line
 
  -self.data=numpy.zeros(shape=dims)
 
 
 
  to throw a MemoryError exception.
 
  (if I comment out the first line I get no error.)
 
  This is on a windows xp machine with latest numpy and the latest
 matplotlib.
 
 
 
  I have a feeling this may be a nightmare to figure out what matplotlib
  and/or numpy are doing wrong. Any ideas where I can start?

 Print out dims just before the second line to make sure that it is
 reasonable. A MemoryError is raised when numpy cannot allocate enough
 memory on your system. If dims is too large for some reason, you could
 run into that limit. It might be because what you are trying to plot
 is simply too large or there might possibly (but unlikely) be a bug
 that is miscalculating dims.

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco



Re: [Numpy-discussion] matplotlib and numpy cause MemoryError

2009-11-18 Thread Mathew Yeates
also, the exception is only thrown when I plot something first. I wonder if
matplotlib is messing something up.

On Wed, Nov 18, 2009 at 2:13 PM, Mathew Yeates mat.yea...@gmail.com wrote:

 The value of dims is constant and not particularly large. I also checked to
 make sure I wasn't running out of memory. Are there other reasons for this
 error?

 Mathew


 On Wed, Nov 18, 2009 at 1:51 PM, Robert Kern robert.k...@gmail.com wrote:

 On Wed, Nov 18, 2009 at 15:48, Mathew Yeates mat.yea...@gmail.com
 wrote:
  Hi
 
  I have a line of matplotlib code
 
  -self.ax.plot(plot_data,mif)
 
 
 
  that causes the line
 
  -self.data=numpy.zeros(shape=dims)
 
 
 
  to throw a MemoryError exception.
 
  (if I comment out the first line I get no error.)
 
  This is on a windows xp machine with latest numpy and the latest
 matplotlib.
 
 
 
  I have a feeling this may be a nightmare to figure out what matplotlib
  and/or numpy are doing wrong. Any ideas where I can start?

 Print out dims just before the second line to make sure that it is
 reasonable. A MemoryError is raised when numpy cannot allocate enough
 memory on your system. If dims is too large for some reason, you could
 run into that limit. It might be because what you are trying to plot
 is simply too large or there might possibly (but unlikely) be a bug
 that is miscalculating dims.

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco





Re: [Numpy-discussion] matplotlib and numpy cause MemoryError

2009-11-18 Thread Mathew Yeates
It turns out I *was* running out of memory. My dimensions would require 3.5
gig and my plot must have used up some memory.
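
A quick way to sanity-check the size before allocating (the shape below is
hypothetical, chosen to roughly match the 3.5 gig figure):

import numpy as np

dims = (56566500, 8)
nbytes = np.prod(dims, dtype=np.int64) * np.dtype(np.float64).itemsize
print nbytes / 2.0 ** 30, "GiB"   # ~3.4 GiB for this shape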




On Wed, Nov 18, 2009 at 2:43 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



  On Wed, Nov 18, 2009 at 3:13 PM, Mathew Yeates mat.yea...@gmail.com wrote:

 The value of dims is constant and not particularly large.


 Yes, but what are they?


 I also checked to make sure I wasn't running out of memory. Are there
 other reasons for this error?


 If there is a memory error, no memory is used.

 What versions of numpy/matplotlib are you using? Is xp 32 bit or 64 bits?

 Chuck






[Numpy-discussion] memmap limits

2009-11-18 Thread Mathew Yeates
What limits are there on file size when using memmap?

-Mathew


Re: [Numpy-discussion] memmap limits

2009-11-18 Thread Mathew Yeates
For a 64 bit machine, does this mean I am limited to 4 GB?

-Mathew

On Wed, Nov 18, 2009 at 3:48 PM, Robert Kern robert.k...@gmail.com wrote:

 On Wed, Nov 18, 2009 at 17:43, Mathew Yeates mat.yea...@gmail.com wrote:
  What limits are there on file size when using memmap?

 With a modern filesystem, usually you are only limited to the amount
 of contiguous free space in your process's current address space.
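
 Concretely, on a 64-bit build mapping well past 4 GB works, since pages
 are only materialized as they are touched; a sketch:

 import numpy as np

 # ~8 GB backing file: fine on 64-bit, MemoryError/OSError on 32-bit
 big = np.memmap('big.dat', dtype=np.float64, mode='w+',
                 shape=(1024, 1024, 1024))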

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco



Re: [Numpy-discussion] hairy optimization problem

2009-05-07 Thread Mathew Yeates
David Huard wrote:
 Hi Mathew,

 You could use Newton's method to optimize for each vi sequentially. If 
 you have an expression for the jacobian, it's even better.
Here's the problem. Every time f is evaluated, it returns a set of
values (a row in the matrix). But if we are trying to find the minimum
of the first column, we only care about the first value in the set. This
is really N optimization problems I want to perform simultaneously.

Find N (x,y) values where x1,y1 minimizes f in the first column, x2,y2
minimizes f in the second column, etc.
And ... doing this a column at a time is too slow (I just did a quick
calculation and my brute force method is going to take 30 days!)



 What I'd do is write a class with a method f(self, x, y) that records 
 the result of f(x,y) each time it is called. I would  then sample very 
 coarsely the x,y space where I guess my solutions are. You can then 
 select the x,y where v1 is maximum as your initial point for Newton's 
 method and iterate until you converge to the solution for v1. Since 
 during the search for the optimum your class stores the computed 
 points, your initial guess for v2 should be a bit better than it was 
 for v1, which should speed up the convergence to the solution for v2, 
 etc.

 If you have multiple processors available, you can scatter function 
 evaluation among them using ipython. It's easier than it looks.

 Hope someone comes up with a nicer solution,

 David

 On Wed, May 6, 2009 at 3:16 PM, Mathew Yeates myea...@jpl.nasa.gov wrote:

 I have a function f(x,y) which produces N values [v1,v2,v3 ... vN]
 where some of the values are None (only found after evaluation)

 each evaluation of f is expensive and N is large.
 I want N x,y pairs which produce the optimal value in each column.

 A brute force approach would be to generate
 [v11,v12,v13,v14 ]
 [v21,v22,v23 ...]
 etc

 then locate the maximum of each column.
 This is far too slow ..Any other ideas?








Re: [Numpy-discussion] hairy optimization problem

2009-05-07 Thread Mathew Yeates
Sebastian Walter wrote:
 N optimization problems. This is very unusual! Typically the problem
 at hand can be formulated as *one* optimization problem.

   
yes, this is really not so much an optimization problem as it is a 
vectorization problem.
I am trying to avoid
1) Evaluate f over and over and find the maximum in the first column. 
Store solution 1.
2) Evaluate f over and over and find the max in the second column. Store 
solution 2.
Rinse, Repeat


Re: [Numpy-discussion] hairy optimization problem

2009-05-07 Thread Mathew Yeates

Thanks Ken,
I was actually thinking about using caching while on my way into work. 
Might work. Beats the heck out of using brute force. One other question 
(maybe I should ask in another thread): what is the canonical method for 
dealing with missing values?

Suppose f(x,y) returns None for some (x,y) pairs (unknown until 
evaluation). I don't like the idea of setting the return to some small 
value as this may create local maxima in the solution space.

Mathew

Ken Basye wrote:
 Hi Mathew,
Here are some things to think about:  First, is there a way to decompose
 'f' so that it computes only one or a subset of K values, but in 1/N (or K/N)
 time?  If so, you can decompose your problem into N single optimizations. 
 Presumably not, but I think it's worth asking.  Second, what method would
 you use
 if you were only trying to solve the problem for one column?  
I'm thinking about a heuristic solution involving caching, which is close
 to what an earlier poster suggested.  The idea is to cache complete (length
 N) results for each call you make.  Whenever you need to compute f(x,y),
 consult the cache to see if there's a result for any point within D of x,y
 (look up nearest neighbor search).  Here D is a configurable parameter
 which will trade off the accuracy of your optimization against time.  If
 there is, use the cached value instead of calling f.  Now you just do the
 rinse-repeat algorithm, but it should get progressively faster (per
 column) as you get more and more cache hits.
   Possible augmentations:  1) Within the run for a given column, adjust D
 downward as the optimization progresses so you don't reach a fixed-point
 too early.  Trades time for optimization accuracy.  2) When finished, the
 cache should have good values for each column which were found on the pass
 for that column, but there's no reason not to scan the entire cache one last
 time to see if a later pass stumbled on a better value for an earlier
 column.  3) Iterate the entire procedure, using each iteration to seed the
 starting locations for the next - might be useful if your function has many
 local minima in some of the N output dimensions.
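
 A minimal sketch of that cache wrapper (D is the match radius; the
 linear scan could be replaced by a KD-tree for speed):

 class CachedF(object):
     def __init__(self, f, D):
         self.f, self.D = f, D
         self.points = []   # (x, y) pairs already evaluated
         self.values = []   # corresponding length-N result vectors

     def __call__(self, x, y):
         for (px, py), v in zip(self.points, self.values):
             if (px - x) ** 2 + (py - y) ** 2 <= self.D ** 2:
                 return v   # close enough: reuse the cached result
         v = self.f(x, y)
         self.points.append((x, y))
         self.values.append(v)
         return v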



 Mathew Yeates wrote:
   
 Sebastian Walter wrote:
 
 N optimization problems. This is very unusual! Typically the problem
 at hand can be formulated as *one* optimization problem.

   
   
 yes, this is really not so much an optimization problem as it is a 
 vectorization problem.
 I am trying to avoid
 1) Evaluate f over and over and find the maximum in the first column. 
 Store solution 1.
 2) Evaluate f over and over and find the max in the second column. Store 
 solution 2.
 Rinse, Repeat


 

   



[Numpy-discussion] optimization when there are missing values

2009-05-07 Thread Mathew Yeates
What is the canonical method for 
dealing with missing values?

Suppose f(x,y) returns None for some (x,y) pairs (unknown until 
evaluation). I don't like the idea of setting the return to some small  
value as this may create local maxima in the solution space.

So, do any of the scipy packages deal with this?

Mathew



[Numpy-discussion] hairy optimization problem

2009-05-06 Thread Mathew Yeates
I have a function f(x,y) which produces N values [v1,v2,v3 ... vN] 
where some of the values are None (only found after evaluation)

each evaluation of f is expensive and N is large.
I want N x,y pairs which produce the optimal value in each column.

A brute force approach would be to generate
[v11,v12,v13,v14 ]
[v21,v22,v23 ...]
etc

then locate the maximum of each column.
This is far too slow ..Any other ideas?





[Numpy-discussion] help with vectorization

2009-04-27 Thread Mathew Yeates
I know this must be trivial but I can't seem to get it right

I have N 2x2 arrays which perform a rotation. I also have N xy pairs to 
transform. What is the simplest way to perform the transformation 
without looping?

Thanks from someone about to punch their screen.


Re: [Numpy-discussion] help with vectorization

2009-04-27 Thread Mathew Yeates
I should add, I'm starting with N rotation angles. So I should rephrase  
and say I'm starting with N angles and N xy pairs.
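
One loop-free formulation (a sketch; theta holds the N angles, xy the
N pairs):

import numpy as np

N = 100                                    # illustrative size
theta = np.random.uniform(0, 2 * np.pi, N)
xy = np.random.randn(N, 2)

c, s = np.cos(theta), np.sin(theta)
R = np.empty((N, 2, 2))                    # N rotation matrices
R[:, 0, 0], R[:, 0, 1] = c, -s
R[:, 1, 0], R[:, 1, 1] = s, c

# batched matrix-vector product: out[i] = dot(R[i], xy[i])
out = (R * xy[:, np.newaxis, :]).sum(axis=-1)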



Mathew Yeates wrote:
 I know this must be trivial but I can't seem to get it right

 I have N 2x2 arrays which perform a rotation. I also have N xy pairs to 
 transform. What is the simplest way to perform the transformation 
 without looping?

 Thanks from someone about to punch their screen.
   



Re: [Numpy-discussion] performance issue (again)

2009-04-22 Thread Mathew Yeates
well, this isn't a perfect solution. polyfit is better because it 
determines rank based on condition values, finds the eigenvalues, 
etc. But unless it can be vectorized without Python looping, it's too 
slow for me to use.
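
Spelled out, though, the demeaned trick gives both slope and intercept,
vectorized over rows; a sketch:

import numpy as np

def rowwise_linefit(X, Y):
    # X, Y: shape (N, 3); fits y = a*x + b independently in each row
    xm = X.mean(axis=1)[:, np.newaxis]
    ym = Y.mean(axis=1)[:, np.newaxis]
    xd, yd = X - xm, Y - ym
    slope = (xd * yd).sum(axis=1) / (xd * xd).sum(axis=1)
    intercept = ym[:, 0] - slope * xm[:, 0]
    return slope, intercept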

Mathew

josef.p...@gmail.com wrote:



 If you remove the mean from x and y (along axis = 1) then can't you
 just do something like

 (x*y).sum(1) / (x*x).sum(axis=1)



 I think that's what I said 8 days ago.

 Josef




[Numpy-discussion] performance issue (again)

2009-04-21 Thread Mathew Yeates
Hi
I posted something about this earlier

Say I have 2 arrays X and Y with shapes (N,3) where N is large
I am doing the following

for row in range(N):
    result = polyfit(X[row,:], Y[row,:], 1, full=True)  # fit 3 points with a line

This takes forever and I was hoping to find a way to speed things up. 
But now I'm starting to wonder if this is pointless. If the routine 
polyfit takes a long time compared with the time for a Python 
function call, then things can't be sped up.

Any comments?

Mathew


Re: [Numpy-discussion] performance issue (again)

2009-04-21 Thread Mathew Yeates
sheer genius. Done in the blink of an eye and my original was taking 20 
minutes!


Keith Goodman wrote:
 On 4/21/09, Mathew Yeates myea...@jpl.nasa.gov wrote:
   
 Hi
 I posted something about this earlier

 Say I have 2 arrays X and Y with shapes (N,3) where N is large
 I am doing the following

  for row in range(N):
      result=polyfit(X[row,:],Y[row,:],1,full=True) # fit 3 points with a line

  This takes forever and I was hoping to find a way to speed things up.
  But now I'm starting to wonder if this is pointless. If the routine
  polyfit takes a long time compared with the time for a Python
  function call, then things can't be sped up.

 Any comments?
 

 If you remove the mean from x and y (along axis = 1) then can't you
 just do something like

 (x*y).sum(1) / (x*x).sum(axis=1)
   




[Numpy-discussion] problem using optimized libraries

2009-04-14 Thread Mathew Yeates
Hi
The line
"from _dotblas import dot" is giving me an import error. When I 
looked at the symbols in _dotblas.so I only see things like CFLOAT_dot. 
When I trace the system calls I see that my optimized ATLAS libraries 
are being accessed, but immediately after opening libatlas.so I get an 
ImportError, resulting in an unoptimized dot.
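
To see the underlying loader error rather than the silent fallback, it
usually helps to import the module directly (a sketch):

python -c "import numpy.core._dotblas"
# the traceback then names the missing symbol or library, which is
# typically an ATLAS build/ABI mismatch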

Anyone have any ideas?

Mathew


[Numpy-discussion] polyfit on multiple data points

2009-04-13 Thread Mathew Yeates
Hi,
I understand how to fit  the points (x1,y1) (x2,y2),(x3,y3) with a line 
using polyfit. But, what if I want to perform this task on every row of 
an array?
For instance

[[x1,x2,x3],
 [s1,s2,s3]]

[[y1,y2,y3,],
 [r1,r2,r3]]

and I want the results to be the coefficients  [a,b,c]  and [d,e,f] where
[a,b,c] fits the points (x1,y1) (x2,y2),(x3,y3) and
[d,e,f] fits the points (s1,r1) (s2,r2),(s3,r3)

I realize I could use apply_along_axis but I'm afraid of the 
performance penalty. Is there a way to do this without resorting to a 
function call for each row?

Mathew


[Numpy-discussion] help vectorizing something

2008-10-24 Thread Mathew Yeates
Hi
I  have 2 vectors A and B. For each value in A I want to find the location
in B of the same value. Both A and B have unique elements.

Of course I could something like
For each index of A:
    v = A[index]
    location = numpy.where(B == v)

But I have very large lists and it will take too long.

Thanks to any one of you  vectorization gurus that has any ideas.

Mathew


Re: [Numpy-discussion] help vectorizing something

2008-10-24 Thread Mathew Yeates
Hmm. I don't understand the result.

If
a=array([ 1,  2,  3,  7, 10]) and b=array([ 1,  2,  3,  8, 10])

I want to get the result [0,1,2,4], but searchsorted(a,b) produces
[0,1,2,4,4] ?? and searchsorted(b,a) produces [0,1,2,3,4]

??
Mathew
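
(Editor's note: searchsorted returns an insertion position for every element, whether or not the value is actually present, which is why the lengths differ. If the goal here is the indices in a of the values that also occur in b, a hedged sketch using in1d, added in NumPy 1.4, after this thread; np.isin in newer releases:)

import numpy as np

a = np.array([1, 2, 3, 7, 10])
b = np.array([1, 2, 3, 8, 10])
idx = np.where(np.in1d(a, b))[0]   # indices in a of values shared with b
print(idx)                         # [0 1 2 4]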


On Fri, Oct 24, 2008 at 3:12 PM, Charles R Harris [EMAIL PROTECTED]
 wrote:



 On Fri, Oct 24, 2008 at 3:48 PM, Mathew Yeates [EMAIL PROTECTED] wrote:

 Hi
 I  have 2 vectors A and B. For each value in A I want to find the location
 in B of the same value. Both A and B have unique elements.

 Of course I could do something like
 for index in range(len(A)):
     v = A[index]
     location = numpy.where(B == v)

 But I have very large lists and it will take too long.


 In [1]: A = array([1,2,3])

 In [2]: B = array([5,1,3,0,2,4])

 In [3]: i = B.argsort()

 In [4]: Bsorted = B[i]

 In [5]: indices = i[searchsorted(Bsorted,A)]

 In [6]: indices
 Out[6]: array([1, 4, 2])

 Chuck
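
(Editor's note on why this works: i = B.argsort() is the permutation that sorts B; searchsorted finds where each element of A sits in the sorted B; indexing i with those positions maps back to positions in the original, unsorted B. It assumes every element of A is actually present in B.)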



 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] how to tell if a point is inside a polygon

2008-10-13 Thread Mathew Yeates
Is there a routine in scipy for telling whether a point is inside a 
convex 4-sided polygon?

Mathew
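
(No reply appears in this digest; for reference, a hedged sketch, editor's illustration, of the standard same-side test for a convex polygon whose vertices are given in order:)

import numpy as np

def point_in_convex_polygon(pt, verts):
    verts = np.asarray(verts, dtype=float)
    edges = np.roll(verts, -1, axis=0) - verts     # consecutive edge vectors
    to_pt = np.asarray(pt, dtype=float) - verts    # vertex-to-point vectors
    cross = edges[:, 0] * to_pt[:, 1] - edges[:, 1] * to_pt[:, 0]
    # inside (or on the boundary) iff all cross products share a sign
    return bool(np.all(cross >= 0) or np.all(cross <= 0))

print(point_in_convex_polygon((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True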

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] test errors with numpy-1.1.0

2008-08-13 Thread Mathew Yeates
On an AMD x86_64 with ATLAS installed I am getting errors like
ValueError: On entry to DLASD0 parameter number 9 had an illegal value
ValueError: On entry to ILAENV parameter number 2 had an illegal value

Anybody seen this before?


Mathew


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] why isn't libfftw.a being accessed?

2008-07-29 Thread Mathew Yeates
Hi
In my site.cfg I have

[DEFAULT]
library_dirs = /home/ossetest/lib64:/home/ossetest/lib
include_dirs = /home/ossetest/include

[fftw]
libraries = fftw3

but libfftw3.a isn't being accessed.
ls -lu ~/lib/libfftw3.a
-rw-r--r-- 1 ossetest ossetest 1572628 Jul 26 15:02 
/home/ossetest/lib/libfftw3.a

anybody know why?

Mathew
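
(Editor's note: as far as I know, numpy's own FFT, numpy.fft, is a bundled C fftpack and never links against FFTW; the [fftw] section in site.cfg is read by numpy.distutils mainly for the benefit of downstream packages such as scipy, which would explain why the library is never opened during a numpy build.)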




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
I'm getting this too
Ticket #652 ... ok
Ticket 662.Segmentation fault


Robert Kern wrote:
 On Tue, Jul 29, 2008 at 14:16, James Turner [EMAIL PROTECTED] wrote:
   
 I have built NumPy 1.1.0 on RedHat Enterprise 3 (Linux 2.4.21
 with gcc 3.2.3 and glibc 2.3.2) and Python 2.5.1. When I run
 numpy.test() I get a core dump, as follows. I haven't noticed
 any special errors during the build. Should I post the entire
 terminal output from python setup.py install? Maybe as an
 attachment? Let me know if I can provide any more info.
 

 Can you do

   numpy.test(verbosity=2)

 ? That will print out the name of the test before running it, so we
 will know exactly which test caused the core dump.

 A gdb backtrace would also help.

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
My setup is similar (same CPUs), except I am using atlas 3.9.1 and gcc 
4.2.4.

James Turner wrote:
 Are you using ATLAS? If so, where did you get it and what cpu do you have?
 

 Yes. I have Atlas 3.8.2. I think I got it from
 http://math-atlas.sourceforge.net. I also included Lapack 3.1.1
 from Netlib when building it from source. This worked on another
 machine.

 According to /proc/cpuinfo, I have a quad-processor (or core?)
 Intel Xeon. It is running the Linux 2.4 kernel (I needed to build
 a load of software including NumPy with an older glibc so it will
 run on older client machines). Maybe I shouldn't use ATLAS for a
 server installation, since it won't be tuned well? We're trying
 to keep things uniform across our sites though.

 Thanks!

 James.

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
I am using an ATLAS 64-bit lapack 3.9.1.
My cpu info (4 cpus):

-
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 23
model name  : Intel(R) Xeon(R) CPU   X5460  @ 3.16GHz
stepping: 6
cpu MHz : 3158.790
cache size  : 6144 KB
physical id : 0
siblings: 4
core id : 0
cpu cores   : 4
fpu : yes
fpu_exception   : yes
cpuid level : 10
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall 
nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm
bogomips: 6321.80
clflush size: 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:
---

A system trace ends with
futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  write(2, ., 1)  = 1
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  --- SIGSEGV (Segmentation fault) @ 0 (0) ---
2655  +++ killed by SIGSEGV +++

I get no core file



Robert Kern wrote:
 On Tue, Jul 29, 2008 at 14:16, James Turner [EMAIL PROTECTED] wrote:
   
 I have built NumPy 1.1.0 on RedHat Enterprise 3 (Linux 2.4.21
 with gcc 3.2.3 and glibc 2.3.2) and Python 2.5.1. When I run
 numpy.test() I get a core dump, as follows. I haven't noticed
 any special errors during the build. Should I post the entire
 terminal output from python setup.py install? Maybe as an
 attachment? Let me know if I can provide any more info.
 

 Can you do

   numpy.test(verbosity=2)

 ? That will print out the name of the test before running it, so we
 will know exactly which test caused the core dump.

 A gdb backtrace would also help.

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
more info:
the crash happens when eigh() (linalg.py, line 872) calls dsyevd

James Turner wrote:
 Thanks everyone. I think I might try using the Netlib BLAS, since
 it's a server installation... but please let me know if you'd like
 me to troubleshoot this some more (the sooner the easier).

 James.

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
Charles R Harris wrote:


 This smells like an ATLAS problem.
I don't think so. I crash in a call to dsyevd, which is part of lapack, 
not atlas. Also, when I commented out the call to test_eigh_build I got 
zillions of errors like the following (look at the second one; warnings wasn't imported?)
==
ERROR: check_single (numpy.linalg.tests.test_linalg.TestSVD)
--
Traceback (most recent call last):
  File 
/home/ossetest/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py, 
line 30, in check_single
self.do(a, b)
  File 
/home/ossetest/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py, 
line 100, in do
u, s, vt = linalg.svd(a, 0)
  File 
/home/ossetest/lib/python2.5/site-packages/numpy/linalg/linalg.py, 
line 980, in svd
s = s.astype(_realType(result_t))
ValueError: On entry to DLASD0 parameter number 9 had an illegal value

==
ERROR: Tests polyfit
--
Traceback (most recent call last):
  File 
/home/ossetest/lib/python2.5/site-packages/numpy/ma/tests/test_extras.py, 
line 365, in test_polyfit
assert_almost_equal(polyfit(x,y,3),numpy.polyfit(x,y,3))
  File /home/ossetest/lib/python2.5/site-packages/numpy/ma/extras.py, 
line 882, in polyfit
warnings.warn(Polyfit may be poorly conditioned, np.RankWarning)
NameError: global name 'warnings' is not defined


You should send a note to Clint Whaley (the ATLAS guy). IIRC, ATLAS 
 has some hand-coded asm routines and it seems that support for these 
 very new processors might be broken.

 Chuck


 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
oops. It is ATLAS. I was able to run with a non-optimized lapack.

Mathew Yeates wrote:
 Charles R Harris wrote:
   
 This smells like an ATLAS problem.
 
  I don't think so. I crash in a call to dsyevd, which is part of lapack, 
  not atlas. Also, when I commented out the call to test_eigh_build I got 
  zillions of errors like the following (look at the second one; warnings wasn't imported?)
 ==
 ERROR: check_single (numpy.linalg.tests.test_linalg.TestSVD)
 --
 Traceback (most recent call last):
   File 
 /home/ossetest/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py,
  
 line 30, in check_single
 self.do(a, b)
   File 
 /home/ossetest/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py,
  
 line 100, in do
 u, s, vt = linalg.svd(a, 0)
   File 
 /home/ossetest/lib/python2.5/site-packages/numpy/linalg/linalg.py, 
 line 980, in svd
 s = s.astype(_realType(result_t))
 ValueError: On entry to DLASD0 parameter number 9 had an illegal value

 ==
 ERROR: Tests polyfit
 --
 Traceback (most recent call last):
   File 
 /home/ossetest/lib/python2.5/site-packages/numpy/ma/tests/test_extras.py, 
 line 365, in test_polyfit
 assert_almost_equal(polyfit(x,y,3),numpy.polyfit(x,y,3))
   File /home/ossetest/lib/python2.5/site-packages/numpy/ma/extras.py, 
 line 882, in polyfit
 warnings.warn(Polyfit may be poorly conditioned, np.RankWarning)
 NameError: global name 'warnings' is not defined


   
 You should send a note to Clint Whaley (the ATLAS guy). IIRC, ATLAS 
  has some hand-coded asm routines and it seems that support for these 
 very new processors might be broken.

 Chuck


 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   
 


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
What got fixed?

Robert Kern wrote:
 On Tue, Jul 29, 2008 at 17:41, Mathew Yeates [EMAIL PROTECTED] wrote:
   
 Charles R Harris wrote:
 
 This smells like an ATLAS problem.
   
 I don't think so. I crash in a call to dsyevd which part of lapack but
 not atlas. Also, when I commented out the call to test_eigh_build I get
 zillions of errors like (look at the second one, warnings wasn't imported?)
 

 Fixed in SVN.

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] weird indexing

2008-01-03 Thread Mathew Yeates
Hi
Okay, here's a weird one. In Fortran you can specify the upper/lower 
bounds of an array
e.g. REAL A(3:7)

What would be the best way to translate this to a Numpy array? I would 
like to do something like
A=numpy.zeros(shape=(5,))
and have the expression A[3] actually return A[0].

Or something. Any thoughts?

Mathew
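
(Editor's illustration: NumPy has no native lower-bound support, but a thin wrapper can shift indices. A hedged sketch handling scalar indices only:)

import numpy as np

class OffsetArray(object):
    """Emulate Fortran's REAL A(3:7): valid indices run from `lower` upward."""
    def __init__(self, data, lower):
        self.data = np.asarray(data)
        self.lower = lower
    def __getitem__(self, i):              # slices are not handled in this sketch
        return self.data[i - self.lower]
    def __setitem__(self, i, value):
        self.data[i - self.lower] = value

A = OffsetArray(np.zeros(5), lower=3)      # like REAL A(3:7)
A[3] = 1.0                                 # stores into underlying element 0
print(A[3])                                # 1.0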


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how do I list all combinations

2007-12-26 Thread Mathew Yeates
Which reference manual?

René Bastian wrote:
 On Wednesday 26 December 2007 at 21:22, Mathew Yeates wrote:
   
 Hi
 I've been looking at fromfunction and itertools but I'm flummoxed.

 I have an arbitrary number of lists. I want to form all possible
 combinations from all lists. So if
 r1=[dog,cat]
 r2=[1,2]

 I want to return [[dog,1],[dog,2],[cat,1],[cat,2]]

 It's obvious when the number of lists is not arbitrary. But what if
 thats not known until runtime?

 Mathew
 

 There is an example in the Python 'Reference Manual'.

   
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
 

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how do I list all combinations

2007-12-26 Thread Mathew Yeates
yes, I came up with this and may use it. Seems like it would be insanely 
slow but my problem is small enough that it might be okay.

Thanks
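
(Editor's illustration: a hedged sketch of the fold Keith describes below, plus the itertools.product one-liner available in Python 2.6 and later:)

import itertools

r1 = ["dog", "cat"]
r2 = [1, 2]

def combine(listoflists):
    result = [[]]
    for lst in listoflists:    # fold one list in at a time
        result = [combo + [x] for combo in result for x in lst]
    return result

print(combine([r1, r2]))
# [['dog', 1], ['dog', 2], ['cat', 1], ['cat', 2]]

print([list(c) for c in itertools.product(r1, r2)])   # same result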


Keith Goodman wrote:
 On Dec 26, 2007 12:22 PM, Mathew Yeates [EMAIL PROTECTED] wrote:
   
 I have an arbitrary number of lists. I want to form all possible
 combinations from all lists. So if
 r1=[dog,cat]
 r2=[1,2]

 I want to return [[dog,1],[dog,2],[cat,1],[cat,2]]

 It's obvious when the number of lists is not arbitrary. But what if
 thats not known until runtime?
 

 Would this work?

 Make a function that takes two inputs (a list of lists and a list) and
 returns a list of lists that contains all possible combinations.
 Iterate through all lists by calling the function with the output of
 the previous call (a list of lists) and the next list.
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how do I list all combinations

2007-12-26 Thread Mathew Yeates
Thanks Chuck.

Charles R Harris wrote:


 On Dec 26, 2007 2:30 PM, Charles R Harris [EMAIL PROTECTED] wrote:



 On Dec 26, 2007 1:45 PM, Keith Goodman [EMAIL PROTECTED] wrote:

 On Dec 26, 2007 12:22 PM, Mathew Yeates [EMAIL PROTECTED] wrote:
  I have an arbitrary number of lists. I want to form all possible
  combinations from all lists. So if
  r1=[dog,cat]
  r2=[1,2]
 
  I want to return [[dog,1],[dog,2],[cat,1],[cat,2]]
 
  It's obvious when the number of lists is not arbitrary. But
 what if
  thats not known until runtime?

 Would this work?

 Make a function that takes two inputs (a list of lists and a
 list) and
 returns a list of lists that contains all possible combinations.
 Iterate through all lists by calling the function with the
 output of
 the previous call (a list of lists) and the next list.
 


 Yeah, you can do it with recursion, but I don't think it would be
 quite as efficient. An example of the explicit approach, define
 the following generator:

 def count(listoflists):
     counter = [i[0] for i in listoflists]


 Make that counter = [0 for i in listoflists]. That bug slipped in 
 going from [0]*len(listoflists).

 Chuck
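
(The archive truncates Chuck's generator; what follows is a hedged editorial reconstruction of the odometer idea, not Chuck's actual code:)

def count(listoflists):
    counter = [0 for i in listoflists]     # the corrected initializer
    while True:
        yield [lst[i] for lst, i in zip(listoflists, counter)]
        for pos in reversed(range(len(counter))):   # increment like an odometer
            counter[pos] += 1
            if counter[pos] < len(listoflists[pos]):
                break
            counter[pos] = 0
        else:
            return                         # every digit rolled over: done

print(list(count([["dog", "cat"], [1, 2]])))
# [['dog', 1], ['dog', 2], ['cat', 1], ['cat', 2]]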

 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] vectorizing loops

2007-10-25 Thread Mathew Yeates
Anybody know of any tricks for handling something like

z[0] = 1.0
for i in range(100):
    out[i] = func1(z[i])
    z[i+1] = func2(out[i])

??
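
(Editor's note: no reply survives here, and no general trick exists; z[i+1] depends on out[i], so the loop is inherently sequential in pure NumPy. If the composed update happens to have a closed form it does vectorize. A hedged sketch for the affine special case z[i+1] = a*z[i] + b:)

import numpy as np

a, b = 0.5, 1.0
z0 = 1.0
i = np.arange(101)
z = a**i * z0 + b * (1.0 - a**i) / (1.0 - a)   # closed form, valid for a != 1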
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] pycdf probs

2007-09-12 Thread Mathew Yeates
Anybody know how to contact the pycdf author? His name is Gosselin I 
think. There are hardcoded values that cause pycdf to segfault when 
using large strings.

Mathew


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
Hi
When I try
import numpy
id(numpy.dot) == id(numpy.core.multiarray.dot)

I get True. But I have liblapack.a installed in ~/lib and I put the lines
[DEFAULT]
library_dirs = /home/myeates/lib
include_dirs = /home/myeates/include

in site.cfg
In fact, when I build and run a system trace I see that liblapack.a is 
being accessed.

Any ideas?
Mathew

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
yes, I get
from numpy.core import _dotblas
ImportError: No module named multiarray

Now what?
uname -a
Linux 2.6.9-55.0.2.EL #1 Tue Jun 12 17:47:10 EDT 2007 i686 athlon i386 
GNU/Linux


Robert Kern wrote:
 Mathew Yeates wrote:
   
 Hi
 When I try
 import numpy
 id(numpy.dot) == id(numpy.core.multiarray.dot)

  I get True. But I have liblapack.a installed in ~/lib and I put the lines
 [DEFAULT]
 library_dirs = /home/myeates/lib
 include_dirs = /home/myeates/include

 in site.cfg
  In fact, when I build and run a system trace I see that liblapack.a is 
 being accessed.

 Any ideas?
 

 It is possible that you have a linking problem with _dotblas.so. On some
 systems, such a problem will only manifest itself at run-time, not build-time.
  At runtime, you will get an ImportError, which we catch because that's also
  the error one gets if the _dotblas is legitimately absent.

 Try importing _dotblas by itself to see the error message.


 In [8]: from numpy.core import _dotblas


 Most likely you are missing the appropriate libblas, too, since you don't
 mention it.

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
yes

Robert Kern wrote:
 Mathew Yeates wrote:
   
 yes, I get
 from numpy.core import _dotblas
 ImportError: No module named multiarray
 

 That's just weird. Can you import numpy.core.multiarray by itself?

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
oops. sorry
from numpy.core import _dotblas
ImportError: 
/home/myeates/lib/python2.5/site-packages/numpy/core/_dotblas.so: 
undefined symbol: cblas_zaxpy


Robert Kern wrote:
 Mathew Yeates wrote:
   
 yes, I get
 from numpy.core import _dotblas
 ImportError: No module named multiarray
 

 That's just weird. Can you import numpy.core.multiarray by itself?

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
my site.cfg is just
[DEFAULT]
library_dirs = /home/myeates/lib
include_dirs = /home/myeates/include

python setup.py config gives
F2PY Version 2_3979
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /home/myeates/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /home/myeates/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /home/myeates/lib
  NOT AVAILABLE

blas_info:
  FOUND:
libraries = ['blas']
library_dirs = ['/home/myeates/lib']
language = f77

  FOUND:
libraries = ['blas']
library_dirs = ['/home/myeates/lib']
define_macros = [('NO_ATLAS_INFO', 1)]
language = f77

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /home/myeates/lib
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /home/myeates/lib
  libraries lapack_atlas not found in /home/myeates/lib
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in /home/myeates/lib
  libraries lapack_atlas not found in /home/myeates/lib
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

lapack_info:
  FOUND:
libraries = ['lapack']
library_dirs = ['/home/myeates/lib']
language = f77

  FOUND:
libraries = ['lapack', 'blas']
library_dirs = ['/home/myeates/lib']
define_macros = [('NO_ATLAS_INFO', 1)]
language = f77

running config


Robert Kern wrote:
 Mathew Yeates wrote:
   
 oops. sorry
 from numpy.core import _dotblas
 ImportError: 
 /home/myeates/lib/python2.5/site-packages/numpy/core/_dotblas.so: 
 undefined symbol: cblas_zaxpy
 

 Okay, yes, that's the problem. liblapack depends on libblas. Make sure that 
 you
 specify one to use. Follow the directions in site.cfg.example. If you need 
 more
 help, please tell us what libraries you are using, your full site.cfg and the
 output of

   $ python setup.py config

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
More info: my blas library has zaxpy defined but not cblas_zaxpy.
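
(Editor's illustration: a hedged way to confirm this from the shell; the exact listing will vary, but the reference Fortran BLAS exports the Fortran symbol zaxpy_ and not the C-style cblas_zaxpy that numpy's _dotblas needs.)

nm ~/lib/libblas.a | grep -i zaxpy      # shows the Fortran symbol zaxpy_
nm ~/lib/libblas.a | grep cblas_zaxpy   # no output: the CBLAS interface is absent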

Mathew Yeates wrote:
 my site.cfg is just
 [DEFAULT]
 library_dirs = /home/myeates/lib
 include_dirs = /home/myeates/include

 python setup.py config gives
 F2PY Version 2_3979
 blas_opt_info:
 blas_mkl_info:
   libraries mkl,vml,guide not found in /home/myeates/lib
   NOT AVAILABLE

 atlas_blas_threads_info:
 Setting PTATLAS=ATLAS
   libraries ptf77blas,ptcblas,atlas not found in /home/myeates/lib
   NOT AVAILABLE

 atlas_blas_info:
   libraries f77blas,cblas,atlas not found in /home/myeates/lib
   NOT AVAILABLE

 blas_info:
   FOUND:
 libraries = ['blas']
 library_dirs = ['/home/myeates/lib']
 language = f77

   FOUND:
 libraries = ['blas']
 library_dirs = ['/home/myeates/lib']
 define_macros = [('NO_ATLAS_INFO', 1)]
 language = f77

 lapack_opt_info:
 lapack_mkl_info:
 mkl_info:
   libraries mkl,vml,guide not found in /home/myeates/lib
   NOT AVAILABLE

   NOT AVAILABLE

 atlas_threads_info:
 Setting PTATLAS=ATLAS
   libraries ptf77blas,ptcblas,atlas not found in /home/myeates/lib
   libraries lapack_atlas not found in /home/myeates/lib
 numpy.distutils.system_info.atlas_threads_info
   NOT AVAILABLE

 atlas_info:
   libraries f77blas,cblas,atlas not found in /home/myeates/lib
   libraries lapack_atlas not found in /home/myeates/lib
 numpy.distutils.system_info.atlas_info
   NOT AVAILABLE

 lapack_info:
   FOUND:
 libraries = ['lapack']
 library_dirs = ['/home/myeates/lib']
 language = f77

   FOUND:
 libraries = ['lapack', 'blas']
 library_dirs = ['/home/myeates/lib']
 define_macros = [('NO_ATLAS_INFO', 1)]
 language = f77

 running config


 Robert Kern wrote:
   
 Mathew Yeates wrote:
   
 
 oops. sorry
 from numpy.core import _dotblas
 ImportError: 
 /home/myeates/lib/python2.5/site-packages/numpy/core/_dotblas.so: 
 undefined symbol: cblas_zaxpy
 
   
 Okay, yes, that's the problem. liblapack depends on libblas. Make sure that 
 you
 specify one to use. Follow the directions in site.cfg.example. If you need 
 more
 help, please tell us what libraries you are using, your full site.cfg and the
 output of

   $ python setup.py config

   
 


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
I'm the one who created libblas.a so I must have done something wrong. 
This is lapack-3.1.1.


Robert Kern wrote:
 If your BLAS is just the reference BLAS, don't bother with _dotblas. It won't be
 any faster than the default implementation in numpy. You only get a win if you
 are using an accelerated BLAS with the CBLAS interface for C-style row-major
 matrices. Your libblas does not seem to be such an accelerated BLAS.

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
Thanks Robert
I have a deadline and don't have time to install ATLAS. Instead I'm 
installing clapack. Is this the correct thing to do?

Mathew

Robert Kern wrote:
 Mathew Yeates wrote:
   
 I'm the one who created libblas.a so I must have done something wrong. 
 This is lapack-3.1.1.
 

 No, you didn't do anything wrong, per se, you just built the reference F77 
 BLAS.
 It's not an accelerated BLAS, so there's no point in using it with numpy.
 There's no way you *can* build it to be an accelerated BLAS.

 If you want an accelerated BLAS, try to use ATLAS:

   http://math-atlas.sourceforge.net/

 It is possible that your Linux distribution, whatever it is, already has a 
 build
 of it for you.

   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] gesdd hangs

2007-08-29 Thread Mathew Yeates
I guess I can't blame lapack. My system has atlas so I recompiled numpy 
pointing to atlas. Now

id(numpy.dot) == id(numpy.core.multiarray.dot) is False

However when I run decomp.svd on a 25 by 25 identity matrix, it hangs when 
gesdd is called (line 501 of linalg/decomp.py)

Anybody else seeing this?

Mathew




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] gesdd hangs

2007-08-29 Thread Mathew Yeates
never returns


Charles R Harris wrote:


 On 8/29/07, *Mathew Yeates* [EMAIL PROTECTED] wrote:

 I guess I can't blame lapack. My system has atlas so I recompiled
 numpy
 pointing to atlas. Now

 id(numpy.dot) == id(numpy.core.multiarray.dot) is False

 However when I run decomp.svd on a 25 by 25 identity matrix, it
 hangs when gesdd is called (line 501 of linalag/decomp.py)

 Anybody else seeing this?


 What do you mean by hang?

 Chuck


 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] how do I configure with gfortran

2007-06-30 Thread Mathew Yeates
Does anyone know how to run
python setup.py build
and have gfortran used? It is in my path.

Mathew

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how do I configure with gfortran

2007-06-30 Thread Mathew Yeates
The result:
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found

Something is *broken*.


Robert Kern wrote:
 Mathew Yeates wrote:
   
 Does anyone know how to run
 python setup.py build
 and have gfortran used? It is in my path.
 

 python setup.py config_fc --fcompiler=gnu95 build

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how do I configure with gfortran

2007-06-30 Thread Mathew Yeates
Thanks for anyone's help. I've been trying to figure this out for some 
time now. Stepping through distutils code is a bummer.

-bash-3.1$ uname -a
Linux mu.jpl.nasa.gov 2.6.17-5mdv #1 SMP Wed Sep 13 14:28:02 EDT 2006 
x86_64 Dual-Core AMD Opteron(tm) Processor 2220 SE GNU/Linux

-bash-3.1$ gfortran
gfortran: no input files
-bash-3.1$ which gfortran
/u/vento0/myeates/bin/gfortran
-bash-3.1$ gfortran -v
Using built-in specs.
Target: x86_64-unknown-linux-gnu
Configured with: ../gcc-4.2.0/configure --with-mpfr=/u/vento0/myeates/ 
--with-gmp=/u/vento0/myeates/ --enable-languages=c,fortran 
--prefix=/u/vento0/myeates
Thread model: posix
gcc version 4.2.0

 -bash-3.1$ python setup.py config_fc --fcompiler=gnu95 build 2>&1 | tee out








Robert Kern wrote:
 Mathew Yeates wrote:
   
 result
 Found executable /usr/bin/g77
 gnu: no Fortran 90 compiler found

 Something is *broken*.
 

 Then please provide us with enough information to help you. What platform are
 you on? Exactly what command did you execute? Exactly what output did you get
 (please copy-and-paste or redirect the output to a file)?

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how do I configure with gfortran

2007-06-30 Thread Mathew Yeates
No.
My PC crashed. I swear I have a virus on this machine. Been that kinda 
weekend

Not particularly illuminating but here it is:
Running from numpy source directory.
F2PY Version 2_3875
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /u/vento0/myeates/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
Setting PTATLAS=ATLAS
Setting PTATLAS=ATLAS
  FOUND:
libraries = ['ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/u/vento0/myeates/lib']
language = c
include_dirs = ['/u/vento0/myeates/include']

customize GnuFCompiler
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using config
compiling '_configtest.c':

Robert Kern wrote:
 Mathew Yeates wrote:

   
  -bash-3.1$ python setup.py config_fc --fcompiler=gnu95 build 2>&1 | tee out
   

 Did you forget to attach a file?

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how do I configure with gfortran

2007-06-30 Thread Mathew Yeates
Even more info!
I am using numpy gotten from svn on Wed or Thurs.

Mathew Yeates wrote:
 More info:
 I tried Chris' suggestion , i.e. export F77=gfortran

 And now I get

 Found executable /u/vento0/myeates/bin/gfortran
 gnu: no Fortran 90 compiler found
 Found executable /usr/bin/g77




 Mathew Yeates wrote:
   
 No.
 My PC crashed. I swear I have a virus on this machine. Been that kinda 
 weekend

 Not particularly illuminating but here it is:
 Running from numpy source directory.
 F2PY Version 2_3875
 blas_opt_info:
 blas_mkl_info:
   libraries mkl,vml,guide not found in /u/vento0/myeates/lib
   NOT AVAILABLE

 atlas_blas_threads_info:
 Setting PTATLAS=ATLAS
 Setting PTATLAS=ATLAS
 Setting PTATLAS=ATLAS
   FOUND:
 libraries = ['ptf77blas', 'ptcblas', 'atlas']
 library_dirs = ['/u/vento0/myeates/lib']
 language = c
 include_dirs = ['/u/vento0/myeates/include']

 customize GnuFCompiler
 Found executable /usr/bin/g77
 gnu: no Fortran 90 compiler found
 gnu: no Fortran 90 compiler found
 customize GnuFCompiler
 gnu: no Fortran 90 compiler found
 gnu: no Fortran 90 compiler found
 customize GnuFCompiler using config
 compiling '_configtest.c':

 Robert Kern wrote:
   
 
 Mathew Yeates wrote:

   
 
   
  -bash-3.1$ python setup.py config_fc --fcompiler=gnu95 build 2>&1 | tee out
   
 
   
 Did you forget to attach a file?

   
 
   
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




   
 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] baffled by gfortran

2007-06-28 Thread Mathew Yeates
I have gfortran installed in my path. But when I run python setup.py 
build I get
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found

The output of python setup.py config_fc --help-fcompiler shows that 
gfortran is visible
Gnu95FCompiler instance properties:
  archiver= ['/u/vento0/myeates/bin/gfortran', '-cr']
  compile_switch  = '-c'
  compiler_f77= ['/u/vento0/myeates/bin/gfortran', '-Wall', '-ffixed-form',
                 '-fno-second-underscore', '-fPIC', '-O3', '-funroll-loops',
                 '-march=opteron', '-mmmx', '-m3dnow', '-msse2', '-msse']
  compiler_f90= ['/u/vento0/myeates/bin/gfortran', '-Wall', '-fno-second-underscore',
                 '-fPIC', '-O3', '-funroll-loops', '-


I have  tried
python setup.py config_fc --fcompiler=gnu95 build etc

but I can't figure it out. Can some examples be added to the readme file?
 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion