On Wed, May 3, 2017 at 4:12 PM, Muhammad Faiz <mfc...@gmail.com> wrote:
> On Wed, May 3, 2017 at 1:47 AM, Paul B Mahol <one...@gmail.com> wrote:
>> On 5/2/17, Muhammad Faiz <mfc...@gmail.com> wrote:
>>> On Mon, May 1, 2017 at 3:30 PM, Paul B Mahol <one...@gmail.com> wrote:
>>>> Signed-off-by: Paul B Mahol <one...@gmail.com>
>>>> ---
>>>>  configure                   |   2 +
>>>>  doc/filters.texi            |  10 ++
>>>>  libavfilter/Makefile        |   1 +
>>>>  libavfilter/af_afirfilter.c | 409 ++++++++++++++++++++++++++++++++++++++++++++
>>>>  libavfilter/allfilters.c    |   1 +
>>>>  5 files changed, 423 insertions(+)
>>>>  create mode 100644 libavfilter/af_afirfilter.c
>>>>
>>>> diff --git a/configure b/configure
>>>> index b3cb5b0..7fc7af4 100755
>>>> --- a/configure
>>>> +++ b/configure
>>>> @@ -3078,6 +3078,8 @@ unix_protocol_select="network"
>>>>  # filters
>>>>  afftfilt_filter_deps="avcodec"
>>>>  afftfilt_filter_select="fft"
>>>> +afirfilter_filter_deps="avcodec"
>>>> +afirfilter_filter_select="fft"
>>>>  amovie_filter_deps="avcodec avformat"
>>>>  aresample_filter_deps="swresample"
>>>>  ass_filter_deps="libass"
>>>> diff --git a/doc/filters.texi b/doc/filters.texi
>>>> index 119e747..ea343d1 100644
>>>> --- a/doc/filters.texi
>>>> +++ b/doc/filters.texi
>>>> @@ -878,6 +878,16 @@ afftfilt="1-clip((b/nb)*b,0,1)"
>>>>  @end example
>>>>  @end itemize
>>>>
>>>> +@section afirfilter
>>>> +
>>>> +Apply an Arbitrary Frequency Impulse Response filter.
>>>> +
>>>> +This filter uses the second input stream as FIR coefficients.
>>>> +If the second stream holds a single channel, it is used for
>>>> +all channels of the first stream; otherwise the number of
>>>> +channels in the second stream must match the number of
>>>> +channels in the first stream.
>>>> +
>>>>  @anchor{aformat}
>>>>  @section aformat
>>>>
>>>> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
>>>> index 66c36e4..1a0f24b 100644
>>>> --- a/libavfilter/Makefile
>>>> +++ b/libavfilter/Makefile
>>>> @@ -38,6 +38,7 @@ OBJS-$(CONFIG_AEMPHASIS_FILTER)              += af_aemphasis.o
>>>>  OBJS-$(CONFIG_AEVAL_FILTER)                  += aeval.o
>>>>  OBJS-$(CONFIG_AFADE_FILTER)                  += af_afade.o
>>>>  OBJS-$(CONFIG_AFFTFILT_FILTER)               += af_afftfilt.o window_func.o
>>>> +OBJS-$(CONFIG_AFIRFILTER_FILTER)             += af_afirfilter.o
>>>>  OBJS-$(CONFIG_AFORMAT_FILTER)                += af_aformat.o
>>>>  OBJS-$(CONFIG_AGATE_FILTER)                  += af_agate.o
>>>>  OBJS-$(CONFIG_AINTERLEAVE_FILTER)            += f_interleave.o
>>>> diff --git a/libavfilter/af_afirfilter.c b/libavfilter/af_afirfilter.c
>>>> new file mode 100644
>>>> index 0000000..ef2488a
>>>> --- /dev/null
>>>> +++ b/libavfilter/af_afirfilter.c
>>>> @@ -0,0 +1,409 @@
>>>> +/*
>>>> + * Copyright (c) 2017 Paul B Mahol
>>>> + *
>>>> + * This file is part of FFmpeg.
>>>> + *
>>>> + * FFmpeg is free software; you can redistribute it and/or
>>>> + * modify it under the terms of the GNU Lesser General Public
>>>> + * License as published by the Free Software Foundation; either
>>>> + * version 2.1 of the License, or (at your option) any later version.
>>>> + *
>>>> + * FFmpeg is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>>>> + * Lesser General Public License for more details.
>>>> + *
>>>> + * You should have received a copy of the GNU Lesser General Public
>>>> + * License along with FFmpeg; if not, write to the Free Software
>>>> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
>>>> + */
>>>> +
>>>> +/**
>>>> + * @file
>>>> + * An arbitrary audio FIR filter
>>>> + */
>>>> +
>>>> +#include "libavutil/audio_fifo.h"
>>>> +#include "libavutil/avassert.h"
>>>> +#include "libavutil/channel_layout.h"
>>>> +#include "libavutil/common.h"
>>>> +#include "libavutil/opt.h"
>>>> +#include "libavcodec/avfft.h"
>>>> +
>>>> +#include "audio.h"
>>>> +#include "avfilter.h"
>>>> +#include "formats.h"
>>>> +#include "internal.h"
>>>> +
>>>> +typedef struct FIRContext {
>>>> +    const AVClass *class;
>>>> +
>>>> +    int n;
>>>> +    int eof_coeffs;
>>>> +    int have_coeffs;
>>>> +    int nb_taps;
>>>> +    int fft_length;
>>>> +    int nb_channels;
>>>> +    int one2many;
>>>> +
>>>> +    FFTContext *fft, *ifft;
>>>> +    FFTComplex **fft_data;
>>>> +    FFTComplex **fft_coef;
>>>
>>> You could probably use rdft here, for performance reasons.
>>
>> I will concentrate on correctness of the output first.
>
> OK.
>
>>
>>>
>>>
>>>
>>>> +
>>>> +    AVAudioFifo *fifo[2];
>>>> +    AVFrame *in[2];
>>>> +    AVFrame *buffer;
>>>> +    int64_t pts;
>>>> +    int hop_size;
>>>> +    int start, end;
>>>> +} FIRContext;
>>>> +
>>>> +static int fir_filter(FIRContext *s, AVFilterLink *outlink)
>>>> +{
>>>> +    AVFilterContext *ctx = outlink->src;
>>>> +    int start = s->start, end = s->end;
>>>> +    int ret = 0, n, ch, j, k;
>>>> +    int nb_samples;
>>>> +    AVFrame *out;
>>>> +
>>>> +    nb_samples = FFMIN(s->fft_length, av_audio_fifo_size(s->fifo[0]));
>>>> +
>>>> +    s->in[0] = ff_get_audio_buffer(ctx->inputs[0], nb_samples);
>>>> +    if (!s->in[0])
>>>> +        return AVERROR(ENOMEM);
>>>> +
>>>> +    av_audio_fifo_peek(s->fifo[0], (void **)s->in[0]->extended_data, nb_samples);
>>>> +
>>>> +    for (ch = 0; ch < outlink->channels; ch++) {
>>>> +        const float *src = (float *)s->in[0]->extended_data[ch];
>>>> +        float *buf = (float *)s->buffer->extended_data[ch];
>>>> +        FFTComplex *fft_data = s->fft_data[ch];
>>>> +        FFTComplex *fft_coef = s->fft_coef[ch];
>>>> +
>>>> +        memset(fft_data, 0, sizeof(*fft_data) * s->fft_length);
>>>> +        for (n = 0; n < nb_samples; n++) {
>>>> +            fft_data[n].re = src[n];
>>>> +            fft_data[n].im = 0;
>>>> +        }
>>>> +
>>>> +        av_fft_permute(s->fft, fft_data);
>>>> +        av_fft_calc(s->fft, fft_data);
>>>> +
>>>> +        fft_data[0].re *= fft_coef[0].re;
>>>> +        fft_data[0].im *= fft_coef[0].im;
>>>> +        for (n = 1; n < s->fft_length; n++) {
>>>> +            const float re = fft_data[n].re;
>>>> +            const float im = fft_data[n].im;
>>>> +
>>>> +            fft_data[n].re = re * fft_coef[n].re - im * fft_coef[n].im;
>>>> +            fft_data[n].im = re * fft_coef[n].im + im * fft_coef[n].re;
>>>> +        }
>>>> +
>>>> +        av_fft_permute(s->ifft, fft_data);
>>>> +        av_fft_calc(s->ifft, fft_data);
>>>> +
>>>> +        start = s->start;
>>>> +        end = s->end;
>>>> +        k = end;
>>>> +
>>>> +        for (n = 0, j = start; j < k && n < s->fft_length; n++, j++) {
>>>> +            buf[j] = fft_data[n].re;
>>>> +        }
>>>> +
>>>> +        for (; n < s->fft_length; n++, j++) {
>>>> +            buf[j] = fft_data[n].re;
>>>> +        }
>>>> +
>>>> +        start += s->hop_size;
>>>> +        end = j;
>>>> +    }
>>>> +
>>>> +    s->start = start;
>>>> +    s->end = end;
>>>> +
>>>> +    if (start >= nb_samples) {
>>>> +        float *dst, *buf;
>>>> +
>>>> +        start -= nb_samples;
>>>> +        end -= nb_samples;
>>>> +
>>>> +        s->start = start;
>>>> +        s->end = end;
>>>> +
>>>> +        out = ff_get_audio_buffer(outlink, nb_samples);
>>>> +        if (!out)
>>>> +            return AVERROR(ENOMEM);
>>>> +
>>>> +        out->pts = s->pts;
>>>> +        s->pts += nb_samples;
>>>
>>> Is the pts handled correctly here? It seems it is not derived from the input pts.
>>
>> It cannot be derived in any other way.
>
> Probably, at least, the first pts should be derived from the input pts.
> Also, is the time_base always 1/sample_rate?
>
> Thanks.
Probably, like in the asetnsamples filter. Thanks.

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel