On 04/15/2011 08:28 PM, Tanu Kaskinen wrote:
> On Fri, 2011-04-15 at 23:21 +0200, Maarten Bosmans wrote:
>> Do the developers of those applications feel the same? Perhaps this is
>> the right time to step back and think about what we want to achieve.
>> I'd say that some sort of client-side access to volume ramping
>> (fade-in/fade-out) is appropriate. But effects like an equalizer are
>> probably better off in a gstreamer pipeline.
> Doing effects in Pulseaudio removes the need to implement equalizer
> support in each and every music player. With better filter support it
> should also be easy to configure per-output eq settings. If the user
> uses both speakers and headphones, the same settings very likely aren't
> ideal for both outputs. Clients shouldn't care about the current
> routing, so this is another reason why Pulseaudio is the right place for
> user equalizers.
Also, doing it in pulseaudio prevents some applications, such as Flash,
from slipping through without the desired effects being applied. This
was a primary motivator for putting it in pulseaudio when I first wrote it.
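
(On Tanu's per-output point, here's roughly how I could imagine per-port
presets being selected - the port names and the table below are just made
up for illustration, not anything the module does today:)

/* Hypothetical sketch: pick an eq preset based on the sink's active
 * port, so speakers and headphones get different curves. */
#include <string.h>

#define NBANDS 8

struct eq_preset {
    const char *port_name;     /* matched against the active port */
    float gain_db[NBANDS];     /* per-band boost/attenuation */
};

/* Assumed preset table; values are made up for illustration. */
static const struct eq_preset presets[] = {
    { "analog-output-headphones", { -2, -1, 0, 0, 1, 2, 2, 1 } },
    { "analog-output-speaker",    {  3,  2, 1, 0, 0, 1, 2, 3 } },
};

static const struct eq_preset *preset_for_port(const char *port) {
    for (unsigned i = 0; i < sizeof(presets)/sizeof(presets[0]); i++)
        if (strcmp(presets[i].port_name, port) == 0)
            return &presets[i];
    return NULL; /* fall back to a flat response */
}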
>> I'm sorry for sounding perhaps a bit grumpy, but I am a bit hesitant.
>> This is really something that should be planned carefully. PulseAudio
>> can't be everything for everybody. That said, it is perfectly possible
>> that I'm missing some important usage scenarios, especially since you
>> seem to be working on embedded stuff, with other demands on pulse.
Maybe not for every filter, but just as the equalizer is easily
justified, other plugins are as well (e.g. normalized volume, preventing
commercials from blasting your eardrums) - especially considering
internet tv (hulu/youtube etc.) will not be cooperative with any level
above pa. The dependency on floating point keeps it above the kernel and
potentially out of alsa (not sure here, but alsa also limits
reusability, assuming pa can run on multiple platforms).
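
To make the normalization idea concrete, here's a rough sketch of one
way such a filter could work - a slow RMS tracker driving a makeup gain.
None of this is existing pa code, just an illustration:

/* Hypothetical sketch: keep perceived loudness near a target by
 * tracking a running RMS and easing the gain toward target/rms. */
#include <math.h>

struct normalizer {
    float mean_sq;    /* smoothed mean square of the input */
    float gain;       /* current makeup gain */
};

static void normalizer_init(struct normalizer *n, float target_rms) {
    n->mean_sq = target_rms * target_rms; /* assume on-target at start */
    n->gain = 1.0f;
}

static void normalize(struct normalizer *n, float *samples,
                      unsigned count, float target_rms, float smoothing) {
    /* smoothing is a small per-sample coefficient, e.g. 0.0005 */
    for (unsigned i = 0; i < count; i++) {
        float x = samples[i];
        /* one-pole smoothing of the signal power */
        n->mean_sq += smoothing * (x * x - n->mean_sq);
        float rms = sqrtf(n->mean_sq) + 1e-9f;
        /* ease the gain toward the ratio; never boost more than 4x,
         * so silence doesn't explode into noise */
        float want = target_rms / rms;
        if (want > 4.0f)
            want = 4.0f;
        n->gain += smoothing * (want - n->gain);
        samples[i] = x * n->gain;
    }
}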
> Embedded systems (or at least phones) require very flexible filter
> setups at the level where Pulseaudio is operating. There are many
> filters that need to be dynamically enabled and disabled at runtime.
> You may be right in that this sounds like reimplementing gstreamer,
> though. At least I have had some ideas about reimplementing the whole
> audio processing pipeline (volumes, mixing, resampling, filters) as
> abstract elements like in gstreamer... The reason being mostly that
> rewinds could then maybe be done in some generic (i.e. understandable)
> way instead of the current approach, which seems like a mess (on the
> surface at least - I haven't *really* tried to grok it).
I believe I pitched gstreamer to Lennart before feeling the distaste of
reimplementation, and he said it was a bad idea because of all the
threading/latency stuff going on there.
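
For what it's worth, the kind of abstraction Tanu describes might look
something like this - purely a thought experiment, none of these types
exist in pa:

/* Hypothetical sketch of an abstract pipeline element: each stage
 * (volume, mix, resample, filter) implements the same interface, so a
 * rewind can be handled generically by walking the chain. */
#include <stddef.h>

struct element {
    /* consume/produce one block of interleaved float samples */
    void (*process)(struct element *e, const float *in, float *out,
                    size_t nframes);
    /* drop internal state corresponding to nframes of queued output */
    void (*rewind)(struct element *e, size_t nframes);
    size_t latency_frames;   /* how much the stage delays the signal */
    struct element *next;
};

static void chain_rewind(struct element *head, size_t nframes) {
    /* generic rewind: every stage gets a chance to invalidate the
     * samples it has already produced */
    for (struct element *e = head; e; e = e->next)
        e->rewind(e, nframes);
}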
One problem I faced in developing the equalizer module is the latency at
which the filter runs at (=required # of samples before an output is
defined) did not run well with rewinds and pa autolatency-adjustement
jazz. Because of this, and what I was told about the method of which
I'm attenuating frequency responses from the speex codec author, I will
be, at some point hopefully in the somewhat near future, switching to an
IIR filter which will have 1 (or some such low #) sample latency
hopefully so I can so I don't kludge up the rest of the pipeline. In
doing so I will unfortunately make the filtering code more complex and I
will lose the resolution of the attenuation adjustment. It will be
closer over all to more typically equalizers seen in the wild with 8 or
so fixed bands.
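
For the curious, the usual building block for that kind of fixed-band
IIR eq is one peaking biquad per band, along the lines of the well-known
RBJ cookbook formulas. A sketch (not the module's actual code):

/* Hypothetical sketch: one RBJ-cookbook peaking biquad per band.
 * Unlike the FFT approach, each output sample depends only on the
 * current input and past state, so the filter adds no block latency. */
#include <math.h>

struct biquad {
    float b0, b1, b2, a1, a2;   /* coefficients, normalized by a0 */
    float s1, s2;               /* direct form II transposed state */
};

static void peaking_init(struct biquad *f, float fs, float f0,
                         float gain_db, float q) {
    float A = powf(10.0f, gain_db / 40.0f);
    float w0 = 2.0f * (float)M_PI * f0 / fs;
    float alpha = sinf(w0) / (2.0f * q);
    float a0 = 1.0f + alpha / A;
    f->b0 = (1.0f + alpha * A) / a0;
    f->b1 = -2.0f * cosf(w0) / a0;
    f->b2 = (1.0f - alpha * A) / a0;
    f->a1 = -2.0f * cosf(w0) / a0;
    f->a2 = (1.0f - alpha / A) / a0;
    f->s1 = f->s2 = 0.0f;
}

static float biquad_run(struct biquad *f, float x) {
    float y = f->b0 * x + f->s1;
    f->s1 = f->b1 * x - f->a1 * y + f->s2;
    f->s2 = f->b2 * x - f->a2 * y;
    return y;
}

/* An 8-band eq is then just eight of these in series, one per fixed
 * center frequency. */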