At 03:28 PM 3/3/2008, Ann wrote:
> >Sounds familiar. If you have a good signal-to-noise ratio, you can get
> >subpixel accuracy by oversampling the irfft, or better but slower, by
> >using numerical optimization to refine the peak you found with argmax.
the S/N here is poor, and high data rates wo
Thank you for the input!
It sounds like Fourier methods will be fastest, by design, for sample
counts of hundreds to thousands.
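The subpixel refinement mentioned in the quote can be sketched with a three-point parabolic fit around the integer argmax peak. This is a minimal sketch on a synthetic signal; the function name and test data are illustrative, not from the thread:

```python
import numpy as np

def parabolic_peak(y, i):
    """Refine an integer peak index i to subpixel accuracy by fitting
    a parabola through the three samples around the peak."""
    a, b, c = y[i - 1], y[i], y[i + 1]
    denom = a - 2 * b + c
    if denom == 0:
        return float(i)
    return i + 0.5 * (a - c) / denom

# toy example: a smooth peak whose true centre falls between samples
x = np.arange(64)
y = np.exp(-0.5 * ((x - 20.3) / 3.0) ** 2)
i = int(np.argmax(y))            # integer peak at 20
refined = parabolic_peak(y, i)   # refined estimate near 20.3
print(refined)
```

For a noisy peak this is usually good to a small fraction of a sample; full numerical optimization (e.g. a 1-D minimizer on an interpolated correlation) is slower but more robust at poor S/N.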
I currently do steps like:
from scipy import fftpack

Im1 = get_stream_array_data()
Im2 = load_template_array_data(fh2)
# note: len(Im1) == len(Im2)
Ffft_im1 = fftpack.rfft(Im1)
Ffft_im2 = fftpack.rfft(Im2)
At 01:24 PM 3/3/2008, you wrote:
> > If you use 'same' or 'full' you'll end up with different
> >amounts of offset. I imagine that this is due to the way the data is padded.
> >The offset should be deterministic based on the mode and the size of the
> >data, so it should be straightforward to compe
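The quoted point is easy to verify: the output length of numpy.correlate, and with it the index corresponding to zero lag, is fixed by the mode and the input sizes. A small illustrative example:

```python
import numpy as np

a = np.arange(5.0)
b = np.array([0.0, 1.0, 0.0])

# output length is determined by the mode and the input sizes,
# so the lag corresponding to index 0 shifts with the mode
n_valid = len(np.correlate(a, b, mode='valid'))  # len(a) - len(b) + 1
n_same = len(np.correlate(a, b, mode='same'))    # len(a)
n_full = len(np.correlate(a, b, mode='full'))    # len(a) + len(b) - 1
print(n_valid, n_same, n_full)
```

Given those lengths, the deterministic per-mode offset can be computed once and subtracted from the argmax.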
I'm trying to figure out what numpy.correlate does, and, what are
people using to calculate the phase shift of 1D signals?
(I coded a routine that uses rfft, conjugate, ratio, irfft, and
argmax, based on a paper by Hongjie Xie, "An IDL/ENVI implementation
of the FFT-Based Algorithm for Automatic Image Registration".)
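The sequence described in the parenthetical (rfft, conjugate, ratio, irfft, argmax) amounts to phase correlation: normalize the cross-power spectrum to unit magnitude so only phase survives, then the inverse transform peaks at the shift. A minimal sketch using numpy.fft for clarity (note numpy.fft.rfft returns complex values, unlike scipy.fftpack.rfft's packed-real format; names are illustrative):

```python
import numpy as np

def phase_shift(ref, sig):
    """Estimate the circular shift of sig relative to ref by phase
    correlation: normalize the cross-power spectrum to unit magnitude,
    inverse-transform, and take the argmax."""
    R = np.fft.rfft(ref)
    S = np.fft.rfft(sig)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-15      # keep phase, drop amplitude
    corr = np.fft.irfft(cross, n=len(ref))
    lag = int(np.argmax(corr))
    if lag > len(ref) // 2:             # map large lags to negative shifts
        lag -= len(ref)
    return lag

rng = np.random.default_rng(0)
ref = rng.standard_normal(1024)
shift_a = phase_shift(ref, np.roll(ref, 37))
shift_b = phase_shift(ref, np.roll(ref, -5))
print(shift_a, shift_b)
```

The epsilon in the normalization guards against dividing by a zero-magnitude bin.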
numpy array objects
http://effbot.org/zone/pil-changes-116.htm
frombuffer, fromstring, fromarray, tostring etc.
http://www.pythonware.com/library/pil/handbook/image.htm
(I've used them for home astronomy projects, myself.)
Ray Schumacher
Blue Cove Interactive
No virus found in this outgoing message.
We've just built Python 2.4 in the lab with Intel ICC and MS VS 2005,
but had problems building 2.5, and also numpy, with the MKL.
Would someone be willing to share their build experience or project files?
Ray
distros, and our company now has
(finally) decided to purchase the ICC and MKL with the intention of
compiling for P4 and Core2 targets. The binaries could then go up on
the company's web site.
Thanks,
Ray Schumacher
sourceforge.net/
seem like the ticket.
Whatever happened to pyGA?
GA http://www.emsl.pnl.gov/docs/global/ is still around.
http://www.ece.lsu.edu/jxr/pohll-02/papers/jarek.pdf
Best,
Ray Schumacher
r numpy - I had seen a
poster who offered help. When the company actually purchases the
product I'd be glad to do it on 2-3 targets if someone can assist
with the parameters. We have one consultant here who has done it on Linux.
Ray Schumacher
Cognitive Vision
8580 Production Ave.,
At 10:57 PM 11/1/2007, Charles R Harris wrote:
> > An additional complication is that I pass the numpy (or Numeric)
> > array address to the ctypes library call so that the data is placed
> > directly into the array from the call. I use the if/else end wrap
> > logic to determine whether I need to
At 11:55 PM 10/31/2007, Travis wrote:
>Ray S wrote:
> > I am using
> > fftRes = abs(fft.rfft(data_array[end-2**15:end]))
> >
>At first glance, I would say that I don't expect memory to be growing
>here, so it looks like a problem with rfft that deserves looking into.
I saw that Numeric did also (I
to
create arrays from the other process's address and laying a numpy
array on top was prone to that in experimentation. But I had the same
issue as Mark Heslep
http://aspn.activestate.com/ASPN/Mail/Message/ctypes-users/3192422
of creating a numpy array fro
Geoffrey Zhu wrote:
> Hi,
>
> I am about to write a C extension module. C functions in the module will
> take and return numpy arrays. I found a tutorial online, but I am not
> sure about the following:
I agree with others that ctypes might be your best path.
The codeGenerator is magic, if yo
Andrew added:
I'll pitch in a few donuts (and
my eternal gratitude) for an example of
shared memory use using numpy arrays that is cross platform, or at
least works in linux, mac, and windows.
I thought that getting the address from the buffer() of the array and
creating a new one from it in th
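For what it's worth, the cross-platform shared-memory use Andrew asks for is straightforward with the standard library's multiprocessing.shared_memory (which arrived much later, in Python 3.8, so it was not an option in this thread). A minimal sketch; the sizes and names are illustrative:

```python
import numpy as np
from multiprocessing import shared_memory

# create a named shared block and view it as a numpy array
shm = shared_memory.SharedMemory(create=True, size=16 * 8)
arr = np.ndarray((16,), dtype=np.float64, buffer=shm.buf)
arr[:] = np.arange(16.0)

# another process would attach with SharedMemory(name=shm.name);
# attaching in-process here just demonstrates the mechanism
shm2 = shared_memory.SharedMemory(name=shm.name)
arr2 = np.ndarray((16,), dtype=np.float64, buffer=shm2.buf)
val = float(arr2[5])            # same underlying memory as arr
print(val)

del arr, arr2                   # release views before closing the mapping
shm2.close()
shm.close()
shm.unlink()
```

It works on Linux, macOS, and Windows; the ndarray views must be dropped before the mapping is closed, or close() raises BufferError.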
After Googling for examples on this, in the Cookbook
http://www.scipy.org/Cookbook/Multithreading
MPI and POSH (dead?), I don't think I know the answer...
We have a data collection app running on dual core processors; I start
one thread collecting/writing new data directly into a numpy circular
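The collector thread's target, a preallocated numpy circular buffer, might look like this minimal sketch (the class and sizes are illustrative, not the poster's code; real multithreaded use would add a lock around write):

```python
import numpy as np

class RingBuffer:
    """Fixed-size circular buffer backed by a preallocated numpy array."""
    def __init__(self, size):
        self.data = np.zeros(size)
        self.size = size
        self.head = 0                    # next write position

    def write(self, chunk):
        chunk = np.asarray(chunk, dtype=float)
        n = len(chunk)
        end = self.head + n
        if end <= self.size:
            self.data[self.head:end] = chunk
        else:                            # wrap around the end of the array
            k = self.size - self.head
            self.data[self.head:] = chunk[:k]
            self.data[:n - k] = chunk[k:]
        self.head = end % self.size

buf = RingBuffer(8)
buf.write(np.arange(6.0))                # fills slots 0..5
buf.write(np.arange(6.0, 11.0))          # wraps past the end
print(buf.data, buf.head)
```

Writing in place avoids per-sample allocation, which matters at high data rates.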
I'm still curious about the licensing aspects of using Intel's
compiler and libs. Is the compiled Python/numpy result distributable,
like any other compiled program?
Ray
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scip
Thanks Rex,
I'll give it a try next week.
>I've compiled both Numpy and Python 2.5 with the Intel compiler. On a
>Core 2 Duo, at least, the speed increase on Pybench was ~49%, even
>before compiling Python with icc. My post about it was on 25 Jan, and
>has subject: Compiling Python with icc
Has anyone built Python/numpy with the Intel optimized compiler and
FFT lib for Microsoft, and have any pointers?
We're counting on the extra speed, and will be getting the compiler
and libraries next week.
Is there a consensus on distribution requirements for Python compiled
with the Intel co
On 3/14/07, "Charles R Harris" wrote:
> Sounds like you want to save CPU cycles.
> How much you can save will depend
> on the ratio of the bandwidth to the Nyquist.
The desired band is rather narrow, as the goal is to determine the frequency of a
peak that always occurs in a narrow band of about 1 kHz