Hi Alexander,
On 2024-03-14 22:43:38, Alexander Levin via NumPy-Discussion
<[email protected]> wrote:
> Memory Usage -
> https://github.com/2D-FFT-Project/2d-fft/blob/testnotebook/notebooks/memory_usage.ipynb
> Timing comparisons (updated) -
> https://github.com/2D-FFT-Project/2d-fft/blob/testnotebook/notebooks/comparisons.ipynb
I see these timings are still done only for power-of-two-sized
arrays. This is the easiest case to optimize, and I wonder if
you've given further thought to supporting other sizes? PocketFFT,
e.g., implements the Bluestein / Chirp-Z algorithm to deal with
cases where the sizes have large prime factors.
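For reference, including such sizes in the benchmark could look
roughly like the sketch below (sizes and repetition count picked
arbitrarily; 1021 is prime, so pocketfft should take the Bluestein
path there):

import numpy as np
from timeit import timeit

rng = np.random.default_rng(0)
for n in (1024, 1021):  # 1024 = 2**10, 1021 is prime
    x = rng.random((n, n)) + 1j * rng.random((n, n))
    t = timeit(lambda: np.fft.fft2(x), number=10)
    print(f"n = {n}: {t / 10:.4f} s per transform")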
Your test matrix also contains only real values. In that case, you
can use rfft, which might resolve the memory usage difference? I'd
be surprised if PocketFFT uses that much more memory for the same
calculation.
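To illustrate what I mean (assuming the inputs stay real, double
precision; rfft2 stores only the non-redundant half of the
spectrum):

import numpy as np

a = np.random.rand(2048, 2048)
full = np.fft.fft2(a)    # (2048, 2048) complex128, ~64 MiB
half = np.fft.rfft2(a)   # (2048, 1025) complex128, ~32 MiB, same information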
I saw that in the notebook code you have:
matr = np.zeros((n, m), dtype=np.complex64)
matr = np.random.rand(n, m)
Was the intent here to generate a complex random matrix?
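If so, note that the second assignment rebinds matr to a real
float64 array, so the complex64 zeros are discarded. Something
along these lines (just a sketch, reusing your n and m) would give
an actually complex random matrix:

rng = np.random.default_rng()
matr = (rng.random((n, m)) + 1j * rng.random((n, m))).astype(np.complex64)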
Stéfan