Thank you for the input!
It sounds like Fourier methods will be fastest, by design, for sample
counts of hundreds to thousands.
I currently do steps like:
Im1 = get_stream_array_data()
Im2 = load_template_array_data(fh2)
# note: len(Im1) == len(Im2)
fft_im1 = fftpack.rfft(Im1)
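The remaining steps described later in the thread (conjugate, normalized ratio, irfft, argmax) might continue like the sketch below. It uses numpy.fft.rfft rather than scipy.fftpack.rfft, whose packed real-FFT output format differs, and the function name is mine:

```python
import numpy as np

def phase_shift(im1, im2, eps=1e-12):
    """Estimate the circular shift of im2 relative to im1 by phase correlation.

    Sketch only: assumes equal-length, real-valued 1D arrays.
    """
    n = len(im1)
    f1 = np.fft.rfft(im1)
    f2 = np.fft.rfft(im2)
    # Normalized cross-power spectrum; eps guards against division by zero.
    r = f2 * np.conj(f1)
    r /= np.abs(r) + eps
    # Inverse transform peaks at the shift between the two signals.
    cross = np.fft.irfft(r, n=n)
    shift = int(np.argmax(cross))
    # Map wrap-around results to the signed range [-n/2, n/2).
    if shift > n // 2:
        shift -= n
    return shift
```

For a template shifted by d samples, argmax of the inverse transform lands at d (modulo the array length), which is why the wrap-around fix-up at the end is needed.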
At 03:28 PM 3/3/2008, Ann wrote:
Sounds familiar. If you have a good signal-to-noise ratio, you can get
subpixel accuracy by oversampling the irfft, or better but slower, by
using numerical optimization to refine the peak you found with argmax.
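One way to sketch the oversampled-irfft idea: asking irfft for more output points than the input length zero-pads the spectrum, which band-limit-interpolates the correlation peak onto a finer grid. The helper name and the oversampling factor are illustrative, not from the thread:

```python
import numpy as np

def subpixel_shift(im1, im2, up=16, eps=1e-12):
    """Phase correlation with an oversampled inverse FFT.

    Sketch: resolves the shift to roughly 1/up of a sample, assuming
    a reasonable S/N. up is an assumed integer oversampling factor.
    """
    n = len(im1)
    r = np.fft.rfft(im2) * np.conj(np.fft.rfft(im1))
    r /= np.abs(r) + eps
    # Requesting up*n output points zero-pads the spectrum, i.e.
    # interpolates the correlation peak between the original samples.
    cross = np.fft.irfft(r, n=up * n)
    shift = np.argmax(cross) / up
    if shift > n / 2:
        shift -= n
    return shift
```

The cost grows with the oversampling factor, which is why refining only the neighborhood of the coarse argmax peak with a numerical optimizer is the slower-but-better alternative mentioned above.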
The S/N here is poor, though, and the data rates are high.
I'm trying to figure out what numpy.correlate does, and what people
are using to calculate the phase shift of 1D signals.
(I coded one routine that uses rfft, conjugate, ratio, irfft, and
argmax, based on a paper by Hongjie Xie, "An IDL/ENVI implementation
of the FFT Based Algorithm for
At 01:24 PM 3/3/2008, you wrote:
If you use 'same' or 'full' you'll end up with different
amounts of offset. I imagine that this is due to the way the data is padded.
The offset should be deterministic based on the mode and the size of the
data, so it should be straightforward to compensate.
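A small spike example (mine, not from the thread) makes the 'full' offset concrete: the output has len(a) + len(v) - 1 points, the zero-lag term sits at index len(v) - 1, so the lag is simply argmax minus that constant:

```python
import numpy as np

# A unit spike and a copy delayed by 2 samples.
a = np.zeros(6)
a[3] = 1.0
b = np.roll(a, 2)

# mode='full': index i corresponds to lag i - (len(a) - 1).
c = np.correlate(b, a, mode='full')
lag = int(np.argmax(c)) - (len(a) - 1)
# lag recovers the 2-sample delay of b relative to a
```

The 'same' mode crops that 'full' result around its center, which is where the different-looking offset comes from.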
://effbot.org/zone/pil-changes-116.htm
frombuffer, fromstring, fromarray, tostring etc.
http://www.pythonware.com/library/pil/handbook/image.htm
(I've used them for home astronomy projects, myself.)
Ray Schumacher
Blue Cove Interactive
No virus found in this outgoing message.
Checked by AVG Free
We've just built Python 2.4 in the lab with Intel ICC and MS VS 2005,
but had problems building 2.5, and also numpy, with the MKL.
Would someone be willing to share their build experience or project files?
Ray
--
happened to pyGA?
GA http://www.emsl.pnl.gov/docs/global/ is still around.
http://www.ece.lsu.edu/jxr/pohll-02/papers/jarek.pdf
Best,
Ray Schumacher
--
a
poster who offered help. When the company actually purchases the
product I'd be glad to do it on 2-3 targets if someone can assist
with the parameters. We have one consultant here who has done it on Linux.
Ray Schumacher
Cognitive Vision
8580 Production Ave., Suite B
San Diego, CA 92121
858.578.2778
At 10:57 PM 11/1/2007, Charles R Harris wrote:
An additional complication is that I pass the numpy (or Numeric)
array address to the ctypes library call so that the data is placed
directly into the array from the call. I use the if/else end wrap
logic to determine whether I need to do a
At 11:55 PM 10/31/2007, Travis wrote:
Ray S wrote:
I am using
fftRes = abs(fft.rfft(data_array[end-2**15:end]))
At first glance, I would say that I don't expect memory to be growing
here, so it looks like a problem with rfft that deserves looking into.
I saw that Numeric did also (I still
and laying a numpy
array on top was prone to that in experimentation. But I had the same
issue as Mark Heslep
http://aspn.activestate.com/ASPN/Mail/Message/ctypes-users/3192422
of creating a numpy array from a raw address (not a c_array).
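For reference, one way to wrap a raw address in a numpy array without copying is to go through a ctypes array first; this sketch is float64-only, the helper is hypothetical, and nothing in it keeps the underlying memory alive:

```python
import ctypes
import numpy as np

def array_from_address(addr, n):
    """Zero-copy float64 view of n doubles starting at a raw address.

    Caller must guarantee the buffer outlives the returned array and
    really holds at least n doubles; a sketch, not production code.
    """
    buf = (ctypes.c_double * n).from_address(addr)
    return np.frombuffer(buf, dtype=np.float64)
```

Because the view shares memory, writes through it are visible in the original buffer, which is what lets a ctypes library call fill the array in place.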
Thanks,
Ray Schumacher
Geoffrey Zhu wrote:
Hi,
I am about to write a C extension module. C functions in the module will
take and return numpy arrays. I found a tutorial online, but I am not
sure about the following:
I agree with others that ctypes might be your best path.
The codeGenerator is magic, if you
Andrew added:
I'll pitch in a few donuts (and
my eternal gratitude) for an example of
shared memory use using numpy arrays that is cross platform, or at
least works in linux, mac, and windows.
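A minimal cross-platform sketch along those lines uses a memory-mapped file: every process that maps the same file sees the same bytes, and mmap works on Linux, Mac, and Windows. The file name is illustrative:

```python
import mmap
import numpy as np

N = 1024
path = "shared_array.dat"  # hypothetical path; any file both processes can open

# Create and size the backing file once (N float64 values).
with open(path, "wb") as f:
    f.write(b"\x00" * (N * 8))

# Each process maps the same file and lays a numpy array on top, no copy.
f = open(path, "r+b")
mm = mmap.mmap(f.fileno(), N * 8)
arr = np.frombuffer(mm, dtype=np.float64)

arr[:4] = [1.0, 2.0, 3.0, 4.0]
mm.flush()  # push changes through to the file for other mappings
```

This sidesteps the raw-address juggling above at the cost of needing a shared file (or a named mapping) that both processes can locate.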
I thought that getting the address from the buffer() of the array and
creating a new one from it in
I'm still curious about the licensing aspects of using Intel's
compiler and libs. Is the compiled Python/numpy result distributable,
like any other compiled program?
Ray
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
Thanks Rex,
I'll give it a try next week.
I've compiled both Numpy and Python 2.5 with the Intel compiler. On a
Core 2 Duo, at least, the speed increase on Pybench was ~49%, even
before compiling Python with icc. My post about it was on 25 Jan, and
has subject: Compiling Python with icc