On 04/05/2010 02:43 AM, Stefano Sabatini wrote:
On date Thursday 2010-04-01 11:53:42 -0700, Baptiste Coudurier encoded:
On 03/31/2010 10:44 PM, S.N. Hemanth Meenakshisundaram wrote:
Hi All,
Based on the posts above, here's what I was thinking of doing as a
qualification task:
1. Fix the configure_filters function in ffmpeg.c to work for both video
and audio filters.
2. Bring the af_null and af_vol filters from soc/afilters to
soc/avfilter and generalize the avfilter framework to work with the
af_* filters. This should involve a minimal design for the avfilter
audio changes.
3. af_vol currently just scales the input by two. Modify af_vol to
provide some basic volume filter functionality (rough sketch below).
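For item 3, here is the kind of minimal scaling loop I have in mind (a
sketch only; the function name and the 256 == unity fixed-point
convention are my assumptions, not existing soc/afilters code):

#include <stdint.h>

/* Sketch: scale interleaved 16-bit samples by a volume factor given
 * in 1/256 units, so volume == 256 means unity gain. */
static void scale_s16(int16_t *samples, int nb_samples, int volume)
{
    int i;
    for (i = 0; i < nb_samples; i++) {
        int v = (samples[i] * volume + 128) >> 8; /* round */
        if      (v >  32767) v =  32767;          /* clip to 16 bits */
        else if (v < -32768) v = -32768;
        samples[i] = v;
    }
}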
Please let me know if I have understood the tasks right and if this will
work as a qualification task.
I believe that is too much even for a qualification task; much of the
libavfilter audio design has yet to be done.
A qualification task could be to add AVCodecContext get_buffer
functionality to audio decoders.
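Roughly, the decoder side would look something like this (just a
sketch; which AVFrame fields should describe audio buffers is exactly
what the task would have to work out):

AVFrame frame;

avcodec_get_frame_defaults(&frame);
/* ask the user for the output buffer through the get_buffer()
 * callback, as video decoders already do, instead of writing into a
 * caller-supplied int16_t array */
if (avctx->get_buffer(avctx, &frame) < 0) {
    av_log(avctx, AV_LOG_ERROR, "get_buffer() failed\n");
    return -1;
}
/* decode the packet into frame.data[0], then return the frame and
 * the number of decoded samples to the caller */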
As an alternative to this, any video libavfilter task should be OK
(e.g. port a filter from MPlayer/libmpcodecs or write a new one); note
that vf_imlib.c is *not* a valid qualification task (see
https://roundup.ffmpeg.org/issue302). This should help you become
confident with the FFmpeg/libavfilter API.
I especially recommend: a deinterlace filter (check the code in
libavcodec/imgconvert.c) or a libfreetype wrapper (check
vhook/drawtext.c in ffmpeg-0.5 as a reference).
Ask here or on IRC if you have questions.
Hi,
For the deinterlace filter, do we want just a vf_* wrapper around the
imgconvert functions, or should the code be ported into the
vf_deinterlace filter itself?
I have been working on getting the audio filters from soc/afilters to
work more like the current video filters, and wanted to check that I
am on the right track:
1. The client will use an 'asrc_buffer_add_samples' function to hand
buffers to the source (input) filter, or will define an input filter
that can fetch audio data itself.
2. The client then calls request_buffer on the output filter; the
request propagates all the way back to the source (input) filter.
3. The buffer is then passed along the filter chain and processed (a
rough sketch of this sequence follows below).
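Concretely, I imagine the client code would look roughly like this
(every signature below is a guess on my part; only
asrc_buffer_add_samples and request_buffer exist as names so far):

#include <stdint.h>
#include "avfilter.h"

/* Sketch of the intended flow; all signatures are assumptions. */
static int filter_once(AVFilterContext *src, AVFilterContext *sink,
                       const int16_t *samples, int nb_samples,
                       int sample_rate, int channels)
{
    /* 1. hand decoded samples to the source filter */
    if (asrc_buffer_add_samples(src, samples, nb_samples,
                                sample_rate, channels) < 0)
        return -1;

    /* 2. ask the last filter for output; the request propagates
     *    upstream, link by link, until it reaches the source */
    if (request_buffer(sink->inputs[0]) < 0)
        return -1;

    /* 3. the source answers by pushing its buffer downstream, and
     *    each filter processes it on the way to the sink */
    return 0;
}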
I also have a couple of questions:
1. Should the formats for audio come from the SampleFormat enum
(libavcodec/avcodec.h) or from the PCM formats in the CodecID enum?
The old audio filter code seems to wrongly use both, in two different
places.
2. While video filters work a single frame at a time, audio would need
to work on a buffer. Is it OK to use a single call similar to the
current 'filter_buffer' to get this done, instead of the start_frame,
draw_slice, end_frame sequence used for video? (A sketch covering both
questions follows below.)
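To make both questions concrete, here is what I have in mind for an
audio filter (AudioBuffer, the audio use of the format-list helpers,
and ff_push_buffer are invented names; only the SampleFormat values
are real):

#include <stdint.h>
#include "libavcodec/avcodec.h" /* enum SampleFormat */
#include "avfilter.h"

/* hypothetical buffer type passed along the audio filter chain */
typedef struct AudioBuffer {
    uint8_t *data;
    int nb_samples;
    int sample_rate;
    enum SampleFormat format;
} AudioBuffer;

/* question 1: declare supported formats with SampleFormat values,
 * mirroring what video filters do with PixelFormat; reusing the
 * video-side helpers for audio is an assumption */
static int query_formats(AVFilterContext *ctx)
{
    static const int fmts[] = {
        SAMPLE_FMT_S16, SAMPLE_FMT_FLT, SAMPLE_FMT_NONE
    };
    avfilter_set_common_formats(ctx, avfilter_make_format_list(fmts));
    return 0;
}

/* question 2: a single per-buffer callback instead of the
 * start_frame/draw_slice/end_frame triple used for video */
static void filter_buffer(AVFilterLink *link, AudioBuffer *buf)
{
    /* process buf->data in place, then push the result to the next
     * filter (ff_push_buffer() is a hypothetical helper) */
    ff_push_buffer(link->dst->outputs[0], buf);
}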
I am planning to do the deinterlace and libfreetype filters first, and
then continue with this.
Thanks,
Hemanth