> -----Original Message-----
> From: Dev [mailto:[email protected]] On Behalf Of Jussi Laako
> Sent: Friday, April 11, 2014 6:01 AM
> To: [email protected]
> Subject: Re: [Dev] Display servers (was Cynara)
>
> Speaking of display servers, I find it hilarious that keyboard, touch, mouse
> and video output somehow belong together, but audio is always outside the
> picture.
This is an artifact of the relatively late addition of sound to the user
experience. Video, keyboards, and mice were well understood and integrated
long (in computer terms) before audio became normal. If real sound cards had
been included in the original (128k) Macintosh, and moving the mouse had been
accompanied by a swooshing noise, we'd have audio integrated today.

> Does Siri voice input in iOS go through the display server? I don't think so.
>
> Why would audio be somehow special compared to touch, mouse, keyboard
> or video? How about haptic feedback or accelerometers?

Because voice processing has never caught on. Imagine a cube farm where
everyone is talking to their computers. One Loud Howard in the room and
everyone's programs look like his.

> In Tizen, pulseaudio is the audio equivalent of the display server. Why
> doesn't pulseaudio hook into all keyboard, mouse and touch events?

It doesn't need to.

> Better to keep all those separate and not create an "all encompassing" mega
> not-really-display-server that would be a security and privacy disaster.

Right. But we all know that if security makes the mouse jerky or the frame
rate fall below 60 FPS, it's outta there.

_______________________________________________
Dev mailing list
[email protected]
https://lists.tizen.org/listinfo/dev
