Honi has come to the last few community meetings to advocate for this. I think there's a soft consensus to merge a new maxlag or maxlags argument (spelling TBD) to the correlate and convolve Python APIs, and to leave the C API unchanged.
While normally we would say that this should go into SciPy, there are certain classes of problems where improving performance via maxlag is faster than using SciPy's FFT approach. This change would also let Matplotlib eventually delete a workaround it carries for this specific issue, and Matplotlib doesn't depend on SciPy. The array API isn't a concern, because correlate and convolve were excluded from the standardization process; they may be standardized in the future. Honi also reached out to SciPy and JAX and didn't hear back. I'd appreciate it if readers of this mailing list from downstream projects that emulate NumPy's API and might be affected would register any objections they have.

On Tue, Apr 21, 2026 at 6:39 AM Honi Sanders via NumPy-Discussion <[email protected]> wrote:

> *From: *Honi Sanders <[email protected]>
> *Subject: *Proposed feature: add custom lags parameters to correlate/convolve
> *Date: *April 20, 2026 at 3:16:43 PM EDT
> *To: *[email protected]
>
> Hello all,
>
> I'd like to address a missing feature in np.correlate and np.convolve.
>
> Sometimes when computing a cross-correlation or convolution you are only
> interested in the result at a small number of lags. For example:
>
> - Cross-correlating two long time series and wanting only the
>   correlation within a bounded window around lag 0.
>
> - Having millisecond-precision time series but only wanting convolution
>   outputs at minute-scale lags.
>
> The current implementation computes every lag unconditionally, an
> expensive operation, even when most of the output is discarded.
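To make the cost concrete, this is the pattern users fall back on today with the existing NumPy API alone: compute every lag with mode='full' and throw most of the output away. (The array sizes here are illustrative, not from the proposal.)

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(10_000)
v = rng.standard_normal(10_000)

# Today: np.correlate computes all 2*n - 1 lags...
full = np.correlate(a, v, mode="full")  # 19_999 output values
# ...even if we only want a window of +/- 10 lags around lag 0.
maxlag = 10
mid = len(full) // 2  # index of lag 0 for equal-length inputs
window = full[mid - maxlag : mid + maxlag + 1]  # keep 21 values, discard ~19_978
```

Almost all of the work done by the mode='full' call is wasted when only the small window is wanted.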
> I wrote PR #31261 to address this:
>
> https://github.com/numpy/numpy/pull/31261
>
> This PR revives an attempt from ten years ago (PR #5978,
> https://github.com/numpy/numpy/pull/5978) that attracted significant
> interest, but at the time I couldn't bring it up to NumPy's implementation
> standards. The feature was also raised as issues on NumPy (#5954,
> https://github.com/numpy/numpy/issues/5954) and SciPy (scipy/scipy#4940,
> https://github.com/scipy/scipy/issues/4940), discussed on the scipy-dev
> list (http://mail.scipy.org/pipermail/scipy-dev/2015-June/020757.html),
> and got attention on Stack Overflow at the time
> (https://stackoverflow.com/questions/30677241/how-to-limit-cross-correlation-window-width-in-numpy/47893831#47893831).
> The current open issue it resolves is #17286
> (https://github.com/numpy/numpy/issues/17286).
>
> ------------------------------
>
> Proposed API
>
> np.correlate(a, v, mode=..., *, maxlag=None, lags=None)
> np.convolve(a, v, mode=..., *, maxlag=None, lags=None)
> np.correlate_lags(a_len, v_len, mode=..., *, maxlag=None, lags=None)
>
> Parameter design
>
> - maxlag=M (int): symmetric inclusive window [-M, M] (2M+1 lags total).
>   Matches MATLAB's xcorr(x, y, M) convention.
>
> - lags=: a range, a slice with explicit start/stop, or a 1-D integer
>   array-like containing an arithmetic progression.
>
> - np.correlate_lags(a_len, v_len, ...): a companion function (modeled on
>   scipy.signal.correlation_lags) that returns the array of lag indices
>   corresponding to a given correlate or convolve call, without requiring
>   the arrays themselves.
>
> ------------------------------
>
> Related library notes
>
> Matplotlib already has a maxlags argument on Axes.xcorr:
>
> https://github.com/matplotlib/matplotlib/blob/dde076379dad6a51374625dbaa729a71958d4d88/lib/matplotlib/axes/_axes.pyi#L218
>
> Its current implementation calls numpy.correlate(..., 'full') and
> discards the unwanted entries, wasting a lot of computation.
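For readers who want to pin down the maxlag semantics, here is a pure-NumPy reference sketch. It reflects my reading of the proposal above (equal-length 1-D inputs, lags -M through M inclusive), not the PR's actual code, which computes these lags without building the full output.

```python
import numpy as np

def correlate_maxlag(a, v, maxlag):
    """Sketch of the proposed semantics: cross-correlation of two
    equal-length 1-D arrays at the 2*maxlag + 1 lags in [-maxlag, maxlag].

    Reference implementation via direct dot products over the valid
    overlap at each lag; illustrative only, not the PR's implementation.
    """
    a = np.asarray(a, dtype=float)
    v = np.asarray(v, dtype=float)
    n = len(a)
    out = np.empty(2 * maxlag + 1)
    for i, k in enumerate(range(-maxlag, maxlag + 1)):
        # Lag k pairs a[j + k] with v[j]; clip to the indices where both exist.
        if k >= 0:
            out[i] = np.dot(a[k:], v[: n - k])
        else:
            out[i] = np.dot(a[: n + k], v[-k:])
    return out
```

For equal-length inputs this agrees with the central slice of np.correlate(a, v, mode='full'), i.e. full[n - 1 - maxlag : n + maxlag], while doing only the work for the requested lags.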
> NumPy implementing this feature would let Matplotlib immediately benefit
> in performance without changing its existing API, simply by updating the
> numpy.correlate call.
>
> Neural network libraries like PyTorch and MXNet rely on a stride
> parameter in their convolution functions to avoid computing the
> convolution at every lag (e.g.
> https://docs.pytorch.org/docs/stable/generated/torch.nn.modules.conv.Conv1d.html),
> similar to the stride entry in the range construction for the lags
> parameter proposed here.
>
> ------------------------------
>
> Previous discussions on this list
>
> - https://mail.python.org/archives/list/[email protected]/thread/JMXNOCFQRFJYISFKSRI7MRL7GPK5ZD4S/
>
> - https://mail.python.org/archives/list/[email protected]/thread/6FC27XPOD4SBH3KJ7VY5DDAPS5Q46GPO/
>   (several people chimed in with interest in the feature)
>
> ------------------------------
>
> Open questions
>
> C API
>
> The PR also adds PyArray_CorrelateLags to the C API. I am not sure who
> the intended users of the C API are, or whether they would be interested
> in this functionality while we are building it. Feedback welcome.
>
> Companion function
>
> The companion function that returns the matching lag vector is based on
> scipy's companion function, correlation_lags(). A name like
> generate_lagvector() would be more descriptive but inconsistent with
> scipy's naming. Any thoughts on that?
>
> Thanks for your time,
>
> Honi
> _______________________________________________
> NumPy-Discussion mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
> https://mail.python.org/mailman3//lists/numpy-discussion.python.org
> Member address: [email protected]
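As an aside for readers unfamiliar with the companion-function idea in the quoted proposal: scipy.signal.correlation_lags maps each output position of a correlation to its lag index. A minimal pure-NumPy sketch of what the proposed np.correlate_lags could return for mode='full' (the function name and restriction to 'full' mode here are my illustration, not the PR's code):

```python
import numpy as np

def correlation_lags_full(a_len, v_len):
    """Sketch of a lag-index companion for mode='full', mirroring
    scipy.signal.correlation_lags: one lag per output entry of
    np.correlate(a, v, 'full'), running from -(v_len - 1) to a_len - 1.
    """
    return np.arange(-(v_len - 1), a_len)

lags = correlation_lags_full(5, 3)  # array([-2, -1, 0, 1, 2, 3, 4])
```

With maxlag or lags arguments added to correlate itself, the companion would return the matching subset of these indices instead of the full range.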
