Re: [Numpy-discussion] Zoom fft code

2009-01-05 Thread Stéfan van der Walt
Hi Nadav

I recall that you posted an implementation yourself a while ago!

http://www.mail-archive.com/numpy-discussion@scipy.org/msg01812.html

Regards
Stéfan

2009/1/5 Nadav Horesh nad...@visionsense.com:

  I am looking for zoom FFT code. I found old code by Paule Kinzle (a
 Matlab code with a translation to numarray), but its 2D extension (czt1.py)
 looks buggy.

  Nadav.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] unique1d and asarray

2009-01-05 Thread Robert Cimrman
Pierre GM wrote:
 On Jan 4, 2009, at 4:47 PM, Robert Kern wrote:
 
 On Sun, Jan 4, 2009 at 15:44, Pierre GM pgmdevl...@gmail.com wrote:
 If we used np.asanyarray instead, subclasses are recognized properly,
 the mask is recognized by argsort and the result correct.
 Is there a reason why we use np.asarray instead of np.asanyarray ?
 Probably not.
 
 So there wouldn't be any objections to making the switch? We can wait a
 couple of days in case anybody has a problem with that...

There are probably other functions in arraysetops that could easily be 
fixed to work with masked arrays; feel free to do it if you like. I 
have never worked with masked arrays, so the np.asarray problem had 
not come to my mind. Also, if you change np.asarray to np.asanyarray, 
please add a corresponding test employing the masked arrays to 
test_arraysetops.py.
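The subclass-preserving behaviour Pierre describes is easy to demonstrate in a couple of lines (a minimal sketch; the arraysetops change itself is just swapping the one call):

```python
import numpy as np
import numpy.ma as ma

x = ma.array([3, 1, 2], mask=[False, True, False])

# np.asarray strips the subclass, so the mask is silently lost:
print(type(np.asarray(x)))     # plain ndarray

# np.asanyarray passes subclasses through, so argsort sees the mask:
print(type(np.asanyarray(x)))  # MaskedArray
```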

cheers & thanks,
r.


Re: [Numpy-discussion] Zoom fft code

2009-01-05 Thread Nadav Horesh
Thank you for finding it; I had lost the code. In addition, the chirp z-transform 
is broader than zoom FFT. There was someone on this list who was especially 
interested in zoom FFT, so I wondered whether there is code for it. 
Anyway, I can use my old code again.

   Nadav

  


-----Original Message-----
From: numpy-discussion-boun...@scipy.org on behalf of Stéfan van der Walt
Sent: Mon 05-January-2009 10:25
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Zoom fft code
 
Hi Nadav

I recall that you posted an implementation yourself a while ago!

http://www.mail-archive.com/numpy-discussion@scipy.org/msg01812.html

Regards
Stéfan

2009/1/5 Nadav Horesh nad...@visionsense.com:

  I am looking for zoom FFT code. I found old code by Paule Kinzle (a
 Matlab code with a translation to numarray), but its 2D extension (czt1.py)
 looks buggy.

  Nadav.


Re: [Numpy-discussion] help with typemapping a C function to use numpy arrays

2009-01-05 Thread Rich E
Egor,

Thanks for the help.  I think I want to leave the C code as-is,
however, as it is perfectly fine there not knowing 'sizeOutMag', because
it can deduce both array sizes from one variable.  There are many
other similar cases in my code (many where the size of the array is
known from a member of a structure passed to the function).

Maybe I should look into using an 'insertion block' of code in the
interface file, instead of trying to typemap the array?  I am thinking
I may just be able to copy the generated code (from SWIG) into my
interface file to do this, but I have not tried it yet.

I will experiment a little and post again.  Thanks and happy holidays!

regards,
Rich

On Mon, Jan 5, 2009 at 10:42 AM, Egor Zindy ezi...@gmail.com wrote:
 Hello Rich,

 sorry it took so long to answer back, holidays and all :-)

 That's exactly the kind of SWIG / numpy.i problems I've been working on over
 the past few months: How to generate an array you don't know the size of
 a-priori, and then handle the memory deallocation seamlessly. In your case,
 you know that the output array will be half the size of the input array, but
 this falls under the more general case of not knowing the output size
 a-priori.

 Have a look at the files attached. I've rewritten your function header as:
 void sms_spectrumMag( int sizeInMag, float *pInRect, int *sizeOutMag, float
 **pOutMag);

 Easy to see what the input and output arrays are now. Then my numpy.i
 handles the memory deallocation of the **pOutMag array.

 I've actually moved my numpy.i explanations to the scipy/numpy cookbook last
 week :-)
 http://www.scipy.org/Cookbook/SWIG_Memory_Deallocation

 Hope it all makes sense. If you have any questions, don't hesitate!

$ python test_dftmagnitude.py
 [1, 1, 2, 2]
 [ 1.41421354  2.82842708]
 [1, 1, 2, 2, 3, 3, 4, 4]
 [ 1.41421354  2.82842708  4.2426405   5.65685415]
 [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
 [ 1.41421354  2.82842708  4.2426405   5.65685415  7.07106781]

 Regards,
 Egor

 On Wed, Dec 24, 2008 at 1:52 AM, Rich E reakina...@gmail.com wrote:

 Hi list,

 My question has to do with the Numpy/SWIG typemapping system.

 I recently got the typemaps in numpy.i to work on most of my C
 functions that are wrapped using SWIG, if they have arguments of the
 form (int sizeArray, float *pArray).

 Now I am trying to figure out how to wrap functions that aren't of that
 form, such as the following function:

 /*! \brief compute magnitude spectrum of a DFT
  *
  * \param sizeMag   size of output magnitude array (half the size of the
  *                  input real FFT)
  * \param pInRect   pointer to input FFT array (interleaved real/imag floats)
  * \param pOutMag   pointer to float array of magnitude spectrum
  */
 void sms_spectrumMag( int sizeMag, float *pInRect, float *pOutMag)
 {
   int i, it2;
   float fReal, fImag;

   for (i = 0; i < sizeMag; i++)
   {
       it2 = i << 1;  /* index of the real part of bin i */
       fReal = pInRect[it2];
       fImag = pInRect[it2+1];
       pOutMag[i] = sqrtf(fReal * fReal + fImag * fImag);
   }
 }
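For comparison, an equivalent pure-NumPy sketch of the same magnitude computation (assuming the interleaved real/imag layout described in the comment; the function name is mine):

```python
import numpy as np

def spectrum_mag(in_rect):
    # in_rect holds interleaved real/imag pairs: [re0, im0, re1, im1, ...]
    rect = np.asarray(in_rect, dtype=np.float32)
    re = rect[0::2]   # even indices: real parts (it2 = i << 1)
    im = rect[1::2]   # odd indices:  imaginary parts
    return np.sqrt(re * re + im * im)

print(spectrum_mag([1, 1, 2, 2]))  # matches Egor's test output earlier in the thread
```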

 There are two arrays; one is half the size of the other.  But SWIG
 doesn't know this: according to the typemap it will think *pInRect is
 of size sizeMag, and it will not know anything about *pOutMag.

 Ideally in python, I would like to call the function as
 sms_spectrumMag(nArray1, nArray2), where nArray1 is twice the size of
 nArray2, and nArray2 is of size sizeMag.

 I think in order to do this (although if someone has a better
 suggestion, I am open to it), I will have to modify the typemap in
 order to tell SWIG how to call the C function properly.  I do not want
 to have to edit the wrapped C file every time it is regenerated from
 the interface file.


 Here is a start I made with the existing typemap code in numpy.i (not
 working):

 /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1)
  */
 %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY,
            fragment="NumPy_Macros")
   (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1)
 {
   $1 = is_array($input) && PyArray_EquivTypenums(array_type($input),
                                                  DATA_TYPECODE);
 }
 %typemap(in,
          fragment="NumPy_Fragments")
   (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1)
   (PyArrayObject* array=NULL, int i=0)
 {
   array = obj_to_array_no_conversion($input, DATA_TYPECODE);
   if (!array || !require_dimensions(array,1) || !require_contiguous(array)
       || !require_native(array)) SWIG_fail;
   $1 = 1;
   for (i=0; i < array_numdims(array); ++i) $1 *= array_size(array,i);
   $2 = (DATA_TYPE*) array_data(array);
 }

 and try to alter it to allow for a conversion of type:
 (DIM_TYPE DIM1, DATA_TYPE* ARRAY1, DATA_TYPE* ARRAY2)
 where ARRAY1 is size DIM1 * 2 and ARRAY2 is size DIM1.  Then I can
 %apply this to the function that I mentioned in the last post.

 So here are my first two questions:

 1) where is DIM1 used to declare the array size?  I don't see where it
 is used at all, and I need to somewhere multiply it by 2 to declare
 the size of 

Re: [Numpy-discussion] Zoom fft code

2009-01-05 Thread Stéfan van der Walt
2009/1/5 Neal Becker ndbeck...@gmail.com:
 I was not aware that the chirp-z transform can be used to efficiently compute
 the DFT over a limited part of the spectrum.  I could use this.  Any
 references on this technique?

The only reference I have is the one mentioned in the source:

Rabiner, L.R., R.W. Schafer and C.M. Rader.
The Chirp z-Transform Algorithm.
IEEE Transactions on Audio and Electroacoustics, AU-17(2):86--92, 1969

The discrete z-transform,

X(z_k) = \sum_{n=0}^{N-1} x_n z_k^{-n},

is calculated at M points,

z_k = A W^{-k},  k = 0, 1, ..., M-1.

You can think of the z_k's as lying on a spiral, where A controls the outside
radius (starting frequency) and W the rate of inward spiralling.
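A zoom FFT is then just a chirp z-transform with A and W chosen on the unit circle. The following is a minimal Bluestein-style sketch of the idea (hypothetical function names, not the code from the archived link; it uses the identity nk = (n^2 + k^2 - (k-n)^2)/2 to turn the sum into a convolution evaluated with three FFTs):

```python
import numpy as np

def czt(x, M, W, A):
    """Chirp z-transform: X(z_k) = sum_n x[n] * z_k**(-n) at z_k = A * W**(-k)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = np.arange(M)
    L = int(2 ** np.ceil(np.log2(N + M - 1)))   # FFT length for linear convolution
    # Premultiply the input by A^{-n} W^{n^2/2}.
    y = x * A ** (-n) * W ** (n ** 2 / 2.0)
    # Chirp filter v[m] = W^{-m^2/2}, laid out for circular convolution.
    v = np.zeros(L, dtype=complex)
    v[:M] = W ** (-(k ** 2) / 2.0)
    v[L - N + 1:] = W ** (-(np.arange(N - 1, 0, -1) ** 2) / 2.0)
    g = np.fft.ifft(np.fft.fft(y, L) * np.fft.fft(v))
    return g[:M] * W ** (k ** 2 / 2.0)

def zoom_fft(x, k0, M):
    """Evaluate DFT bins k0 .. k0+M-1 of the len(x)-point DFT, without a full FFT."""
    N = len(x)
    return czt(x, M, np.exp(-2j * np.pi / N), np.exp(2j * np.pi * k0 / N))
```

With M much smaller than N this evaluates only the band of interest, which is the "limited part of the spectrum" use Neal asks about.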

Regards
Stéfan


[Numpy-discussion] when will osx linker option -bundle be reflected in distutils

2009-01-05 Thread Garry Willgoose
  I was just wondering what plans there were to reflect the different
  linker options (i.e. -bundle instead of -shared) that are required  
 on
  OSX in the fcompiler files within distutils. While its a minor thing
  it always catches the users of my software when they either install
  fresh or update numpy ... and sometimes on a bad day it even catches
  me ;-)

 I'm sorry; I don't follow. What problems are you having?
 -- Robert Kern

---

OK, for example the file g95.py in distutils/fcompiler has the
following code:

executables = {
    'version_cmd'  : ["g95", "--version"],
    'compiler_f77' : ["g95", "-ffixed-form"],
    'compiler_fix' : ["g95", "-ffixed-form"],
    'compiler_f90' : ["g95"],
    'linker_so'    : ["g95", "-shared"],
    'archiver'     : ["ar", "-cr"],
    'ranlib'       : ["ranlib"]
}

For OS X you need to modify it to:

executables = {
    'version_cmd'  : ["g95", "--version"],
    'compiler_f77' : ["g95", "-ffixed-form"],
    'compiler_fix' : ["g95", "-ffixed-form"],
    'compiler_f90' : ["g95"],
    'linker_so'    : ["g95", "-shared"],
    'archiver'     : ["ar", "-cr"],
    'ranlib'       : ["ranlib"]
}
import sys
if sys.platform.lower() == 'darwin':
    executables['linker_so'] = ["g95", "-Wall", "-bundle"]

The '-shared' option is not implemented in the OS X linker. I'm not sure 
what the underlying difference between '-shared' and '-bundle' is, but 
this substitution is necessary, and it has been working for me for 
the last year or so. You also need the -Wall, for reasons that 
completely escape me.

The same goes for gfortran and intel (both of which I use), and I 
assume for the other compilers available for OS X.



Prof Garry Willgoose,
Australian Professorial Fellow in Environmental Engineering,
Director, Centre for Climate Impact Management (C2IM),
School of Engineering, The University of Newcastle,
Callaghan, 2308
Australia.

Centre webpage: www.c2im.org.au

Phone: (International) +61 2 4921 6050 (Tues-Fri AM); +61 2 6545 9574  
(Fri PM-Mon)
FAX: (International) +61 2 4921 6991 (Uni); +61 2 6545 9574 (personal  
and Telluric)
Env. Engg. Secretary: (International) +61 2 4921 6042

email:  garry.willgo...@newcastle.edu.au; g.willgo...@telluricresearch.com
email-for-life: garry.willgo...@alum.mit.edu
personal webpage: www.telluricresearch.com/garry

Do not go where the path may lead, go instead where there is no path  
and leave a trail
   Ralph Waldo Emerson








Re: [Numpy-discussion] when will osx linker option -bundle be reflected in distutils

2009-01-05 Thread Robert Kern
On Mon, Jan 5, 2009 at 18:48, Garry Willgoose
garry.willgo...@newcastle.edu.au wrote:
  I was just wondering what plans there were to reflect the different
  linker options (i.e. -bundle instead of -shared) that are required
 on
  OSX in the fcompiler files within distutils. While its a minor thing
  it always catches the users of my software when they either install
  fresh or update numpy ... and sometimes on a bad day it even catches
  me ;-)

  I'm sorry; I don't follow. What problems are you having?
  -- Robert Kern

 ---

 OK, for example the file g95.py in distutils/fcompiler has the
 following code:

executables = {
    'version_cmd'  : ["g95", "--version"],
    'compiler_f77' : ["g95", "-ffixed-form"],
    'compiler_fix' : ["g95", "-ffixed-form"],
    'compiler_f90' : ["g95"],
    'linker_so'    : ["g95", "-shared"],
    'archiver'     : ["ar", "-cr"],
    'ranlib'       : ["ranlib"]
}

 For OS X you need to modify it to:

executables = {
    'version_cmd'  : ["g95", "--version"],
    'compiler_f77' : ["g95", "-ffixed-form"],
    'compiler_fix' : ["g95", "-ffixed-form"],
    'compiler_f90' : ["g95"],
    'linker_so'    : ["g95", "-shared"],
    'archiver'     : ["ar", "-cr"],
    'ranlib'       : ["ranlib"]
}
import sys
if sys.platform.lower() == 'darwin':
    executables['linker_so'] = ["g95", "-Wall", "-bundle"]

 The '-shared' option is not implemented in the OS X linker. I'm not sure
 what the underlying difference between '-shared' and '-bundle' is, but
 this substitution is necessary, and it has been working for me for
 the last year or so. You also need the -Wall, for reasons that
 completely escape me.

-Wall absolutely should not affect anything except adding warning
messages. I suspect something else is getting modified when you do
that.

 The same goes for gfortran and intel (both of which I use) and I
 assume the other compilers that are available for OSX.

I've been building scipy for years with gfortran and an unmodified
numpy on OS X. The correct switches are added in the
get_flags_linker_so() method:

def get_flags_linker_so(self):
    opt = self.linker_so[1:]
    if sys.platform == 'darwin':
        # MACOSX_DEPLOYMENT_TARGET must be at least 10.3. This is
        # a reasonable default value even when building on 10.4 when using
        # the official Python distribution and those derived from it (when
        # not broken).
        target = os.environ.get('MACOSX_DEPLOYMENT_TARGET', None)
        if target is None or target == '':
            target = '10.3'
        major, minor = target.split('.')
        if int(minor) < 3:
            minor = '3'
            warnings.warn('Environment variable '
                'MACOSX_DEPLOYMENT_TARGET reset to %s.%s' % (major, minor))
        os.environ['MACOSX_DEPLOYMENT_TARGET'] = '%s.%s' % (major,
                                                            minor)

        opt.extend(['-undefined', 'dynamic_lookup', '-bundle'])
    else:
        opt.append("-shared")
    if sys.platform.startswith('sunos'):
        # SunOS often has dynamically loaded symbols defined in the
        # static library libg2c.a.  The linker doesn't like this.  To
        # ignore the problem, use the -mimpure-text flag.  It isn't
        # the safest thing, but seems to work. 'man gcc' says:
        # ".. Instead of using -mimpure-text, you should compile all
        #  source code with -fpic or -fPIC."
        opt.append('-mimpure-text')
    return opt

If this is not working for you, please show me the error messages you get.
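The deployment-target clamping in that method can be isolated into a standalone helper for testing (a sketch of the same logic; the function name is mine, not a numpy.distutils API):

```python
import warnings

def clamp_deployment_target(target):
    # Mirror the logic in get_flags_linker_so: a missing/empty target
    # defaults to '10.3', and anything below 10.3 is raised with a warning.
    if not target:
        target = '10.3'
    major, minor = target.split('.')[:2]
    if int(minor) < 3:
        minor = '3'
        warnings.warn('MACOSX_DEPLOYMENT_TARGET reset to %s.%s' % (major, minor))
    return '%s.%s' % (major, minor)

print(clamp_deployment_target(None))    # '10.3'
print(clamp_deployment_target('10.1'))  # '10.3'
print(clamp_deployment_target('10.4'))  # '10.4'
```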

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


[Numpy-discussion] ANN: PyMC 2.0

2009-01-05 Thread Christopher Fonnesbeck
Numpy list members,

It gives me great pleasure to be able to announce the long-awaited  
release of PyMC 2.0. Platform-specific installers have been uploaded  
to the Google Code page (Mac OSX) and the Python Package Index (all  
other platforms), along with the new user's guide 
(http://pymc.googlecode.com/files/UserGuide2.0.pdf 
).

PyMC is a python module that implements Bayesian statistical models  
and fitting algorithms, including Markov chain Monte Carlo. Its  
flexibility makes it applicable to a large suite of problems as well  
as easily extensible. Along with core sampling functionality, PyMC  
includes methods for summarizing output, plotting, goodness-of-fit and  
convergence diagnostics.

PyMC 2.0 is a quantum leap from the 1.3 release. It includes a  
completely revised object model and syntax, more efficient log- 
probability computation, a variety of specialised MCMC algorithms, and  
an expanded set of optimised probability distributions. As a result,  
models built for previous versions of PyMC will not run under version  
2.0.

I would like to particularly thank Anand Patil and David Huard, who  
have done most of the work on this version, and to all the users who  
have sent questions, comments and bug reports over the past year or  
two. Please keep the feedback coming!

Please report any problems with the release to the issues page 
(http://code.google.com/p/pymc/issues/list 
).

Python Package Index: http://pypi.python.org/pypi/pymc/
Google Code: http://pymc.googlecode.com
Mailing List: http://groups.google.com/group/pymc

Happy new year,
Chris
--
Christopher J. Fonnesbeck
Department of Mathematics and Statistics
University of Otago, PO Box 56
Dunedin, New Zealand



Re: [Numpy-discussion] when will osx linker option -bundle be reflected in distutils

2009-01-05 Thread Daniel Macks
On Tue, Jan 06, 2009 at 11:48:57AM +1100, Garry Willgoose wrote:
 The 'shared' option is not implemented in the osx linker. Not sure  
 what the underlying difference between 'shared' and 'bundle' is

To answer this narrow part of the question: -shared is the way to
build shared libraries on Linux (I think it's part of the standard GNU
ld and/or the ELF binary format), and is how one builds all sorts of .so
files. On OS X, there is a difference between a dynamic library (one that
is linked against at build time via -lFOO flags, standard extension .dylib)
and a loadable module (one that is loaded at runtime via dlopen() or
similar methods, often with extension .so). Linux doesn't draw as sharp a
distinction. The OS X linker uses different flags to specify which one to
build (-dynamiclib and -bundle, respectively).
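In concrete terms (illustrative command lines, not taken from numpy.distutils):

```shell
# Linux: one flag serves both purposes
gcc -shared -fPIC -o libfoo.so foo.c

# OS X: the linker distinguishes the two cases
gcc -dynamiclib -o libfoo.dylib foo.c                  # link-time library (-lfoo)
gcc -bundle -undefined dynamic_lookup -o foo.so foo.c  # dlopen()-able module
```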

dan

-- 
Daniel Macks
dma...@netspace.org
http://www.netspace.org/~dmacks
