--- Comment #11 from Arun Raghavan <> ---
(In reply to Tanu Kaskinen from comment #10)
> (In reply to Arun Raghavan from comment #6)
> > I'm quite against the idea of having codec support in PulseAudio itself.
> > 
> > In my opinion, the right way to do this is to first move our RTP support to
> > use GStreamer under the hood, and then potentially use that to do encoding
> > if needed.
> The RTP modules are not useful when talking about a tunnel setup or a direct
> client-server connection over TCP. Can you clarify, are you against any
> compressed audio implementation in the native protocol, and if yes, why
> exactly?
> There's a new version of the opus patch, and I thought I'd start reviewing
> it:

I don't think _any_ part of PulseAudio should be talking to specific codecs.
The SBC bit for BlueZ is a bit of an aberration (mostly because it's the only
mandatory codec in that spec and that is permanent).

The reason for this is that today we add Opus support (if this was a few years
ago, it might have been Vorbis), then we'll also want FLAC support, and then
maybe MP3/AAC. And on embedded platforms, we might want to use h/w acceleration
for these, and so on.

Basically, this works as a nice hack to get Opus support (which is great), but
in terms of long-term maintainability it either means freezing the protocol on
one codec, or carrying a bunch of code talking to a bunch of different codec
libraries.
Which is why I think the right thing for us to do for all things in PulseAudio
that need codec support is to use an underlying library, and GStreamer imo is a
good fit for what we want to do. At some point, it would probably be nice to
add API to GStreamer that gives us something closer to what we want -- an API
where you provide a block of audio and get back a compressed frame -- but this
is achievable today anyway.
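To make the shape of that wished-for interface concrete, here is a minimal sketch of a "push a block of PCM, pull a compressed frame" API. Everything here is hypothetical: `FrameEncoder`, its method names, and the stand-in codec callable are invented for illustration and are not real GStreamer API. A real implementation would drive a GStreamer pipeline (something like appsrc ! opusenc ! appsink) behind the same two calls.

```python
class FrameEncoder:
    """Hypothetical codec wrapper: accept arbitrary-size PCM blocks,
    hand back fixed-size compressed frames.

    In a real implementation, pull() would hand a full frame's worth
    of samples to the encoder element (e.g. opusenc); the lambda below
    is a stand-in so the sketch stays self-contained.
    """

    def __init__(self, codec, frame_bytes):
        self.codec = codec            # callable: PCM frame -> compressed frame
        self.frame_bytes = frame_bytes  # e.g. 20 ms of S16LE stereo at 48 kHz
        self._buf = bytearray()

    def push(self, pcm_block):
        """Accept a block of PCM audio of any size."""
        self._buf.extend(pcm_block)

    def pull(self):
        """Return one compressed frame, or None if not enough input yet."""
        if len(self._buf) < self.frame_bytes:
            return None
        frame = bytes(self._buf[:self.frame_bytes])
        del self._buf[:self.frame_bytes]
        return self.codec(frame)


# Stand-in "codec" that just tags the frame; a real one would emit Opus.
enc = FrameEncoder(codec=lambda pcm: b"OPUS" + pcm, frame_bytes=8)
enc.push(b"\x01\x02\x03\x04")
print(enc.pull())                     # None -- not a full frame buffered yet
enc.push(b"\x05\x06\x07\x08\x09")
print(enc.pull())                     # b'OPUS\x01\x02\x03\x04\x05\x06\x07\x08'
```

The point of the sketch is the interface boundary, not the buffering: the protocol code only ever sees opaque compressed frames, so which codec sits behind push/pull (Opus today, FLAC or a hardware encoder tomorrow) stays an implementation detail of the library, not of the protocol.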
