A couple of observations and comments on the long discussion from a lurker on this list, obviously with a big IMHO disclaimer. I ended up writing too much, but the final conclusion is short ;-)

Bill Cox wrote, On 01/01/2010 03:52 AM:
Since I didn't get much response with my more polite e-mail, here's
what I really think, given my current ignorance about pulseaudio...

PulseAudio is cool, but I fear it's over-engineered by some Ph.D's
with too much elegance in their solution, and not enough real world
experience.  Run as user?  Really?

Yes. I do think it makes perfect sense that the mixing of _my_ sounds and control of a hardware audio device to which _I_ have exclusive access is done by _my_ service running as _me_. PulseAudio is not perfect, but it is heading in the right direction for _that_ scenario.

The use-case you point out seems to be a completely different scenario. You need sound control and mixing before the user logs in, and you need sound from a system daemon to be played no matter what else the user is playing. It really seems like you need system-level sound mixing, not user-level sound mixing. That is not in PulseAudio's scope, just like textual consoles and pre-X boot messages aren't the X server's business.

PA does not create an architecture that screen readers fit into, but PA can be part of an architecture in which the screen reader problems can be solved.

Once the user's session and PA are up and running, the user's PA should be able to handle speech from the user's applications (through Orca?). The question is whether you want PA to handle that kind of speech or not.

If you don't want PA at all then PA obviously can't help you ... perhaps except by making you change your mind ;-)

If you want PA for some purposes, then all other audio applications (such as system/console screen readers, speakup?) must play by PA's rules and use ConsoleKit (CK) to take and release exclusive access to the physical devices properly.

If you want PA but don't want the "the user has exclusive access to the sound hardware" concept then there _must_ be some system-level sound mixing that exposes virtual audio devices to which PA and the screen reader software can get exclusive access.

Markus pretty much summarized these options, but I think "play by CK's rules" is missing from the list. Dmix (or PA) as a system-level mixer does, however, also seem like something that should be investigated.
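Purely as an illustration of that dmix option (the device names, IPC key and permissions below are placeholders, not a recommendation), a minimal system-wide /etc/asound.conf that mixes below any per-user daemon could look like:

```
# /etc/asound.conf (illustrative sketch): software mixing at the system level,
# so several processes -- including system daemons -- can share the card.
pcm.!default {
    type plug
    slave.pcm "dmixed"
}
pcm.dmixed {
    type dmix
    ipc_key 1024        # any unique key identifying the shared mixing buffer
    ipc_perm 0666       # world-accessible so both daemons and users can attach
    slave {
        pcm "hw:0,0"    # the real sound card
    }
}
```

Whether PA itself would then run on top of such a virtual device, or in system mode, is exactly the open question in this thread.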

If you think you've got a good reason to do this, is it more important
than sacrificing accessibility for the blind?  The worst disaster for
accessibility for the blind and visually impaired has been adoption of
PulseAudio by the major distros.  I'm personally spending insane hours
trying to fix this mess, and frankly I could use some direction.
We've got Orca mostly working now, but the other essential app -
speakup - is still in limbo.

It seems to me like the old architecture with ALSA and screen readers was pretty much a collection of hacks. It evolved that way for good historical reasons, and it worked and solved an important problem, but it was still hacks, and hacks often either prevent further development from happening or break when development happens anyway. PulseAudio is different from ALSA, and thus it requires different hacks. Some hacks (or PhD-engineered elegant solutions) might be needed in PA, but probably even more in other applications which the PA developers don't, can't and shouldn't consider their responsibility. Thanks for looking into this!

Pointing out that there are problems the developers perhaps weren't aware of is fine, but insinuating that they are discriminating and sacrificing accessibility for the blind just because something breaks isn't fair.

Now the blind community has no pull.  We can't tell Ubuntu to run
PulseAudio as a normal daemon.  As a result, our computers come up
talking but then can't talk once the user logs into gnome.  This is
because speakup launches a process that starts pulseaudio as the gdm
user, and since that process continues forever, the gdm copy of
pulseaudio never dies, and the user's gnome session gets no access to
the sound card, and Orca won't talk.

I just need a solution.  I'm frankly hoping to get more response to
this more emotional e-mail than my previous polite one.  I promise to
be nice once I'm convinced we're not actually letting a bunch of
inexperienced coders undermine the Linux sound system, which is likely
to happen once I'm no longer ignorant of what the heck this user-land
stuff is all about, and when I learn how to write code that gives the
blind speech on their Ctrl+Alt+F1 consoles from boot, as well as after
they login.

You know what it's like trying to help a blind user through e-mail to
figure out what to do when the computer just stops talking?  Ever try
to explain to a user over the phone how to use a graphical
application?  It's much worse than that.  The sound system needs to
work at boot, when we log in, and in fact all the time.  Is that too
much to ask?  That's what I require from Ubuntu/Lucid.  I'm willing to
write the code to make it happen.  Can anyone please advise me on what
code needs to be written to get speakup and Orca to both work with
pulseaudio, from boot, after logging into gnome, and on the console
windows?

It seems to me like the short answer is that if you want everything to work "as before PA", both before PA starts and when switched to text consoles, then the other sound applications (speakup?) just(!) have to use ConsoleKit to take and release exclusive access to the audio hardware through ALSA.
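I haven't tried this with speakup, but for what it's worth, PulseAudio already ships one small tool for exactly this kind of handover: pasuspender asks the user's PA daemon to close its ALSA devices while a wrapped command runs. A hedged shell sketch (the WAV path is just the standard alsa-utils test file):

```shell
#!/bin/sh
# Sketch: borrow exclusive ALSA access from a running per-user PulseAudio.
# pasuspender (shipped with PulseAudio) asks the user's PA instance to close
# its ALSA devices for the duration of the wrapped command.
if command -v pasuspender >/dev/null 2>&1; then
    # While aplay runs, PA has released the hardware, so the command can
    # open hw:0 directly -- the same handover a console screen reader needs.
    pasuspender -- aplay /usr/share/sounds/alsa/Front_Center.wav || true
else
    echo "pasuspender not available; install pulseaudio-utils to try this"
fi
```

That is of course a per-invocation hack, not the CK-based take/release dance; a long-running screen reader daemon would need the ConsoleKit route proper.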

(I haven't seen that stated clearly in the many mails I have seen. Someone will correct me if I'm wrong. ;-) )

/Mads

_______________________________________________
pulseaudio-discuss mailing list
pulseaudio-discuss@mail.0pointer.de
https://tango.0pointer.de/mailman/listinfo/pulseaudio-discuss
