Hi, I have a question about something I've been curious about for a while now, not only with the BrailleNotes and VoiceNotes but with other blindness-related technology. Perhaps this is a question for Jonathan or Dean to answer.

How is the speech in our notetakers produced? I know that it uses the Keynote Gold multimedia synthesizer adapted to run under Windows CE 2.12 or 4.20, but specifically, what produces the sound? Is it a chip, or samples of human speech? Do speech chips, if any exist at all in the BrailleNotes, have moveable parts that somehow make an "s" sound like an "s" and an "a" sound like the "a" in "father" or the "a" in "watch"?

I'm not asking specifically how the Keynote speech is made, just how speech synthesis works in general. Where and how is the sound made? Where does the sound come from? I know that the latest trend in speech is unit selection, where software synthesizers can be anywhere from 50 MB to 1.2 GB for the really high-quality ones; these use segments of pre-recorded human speech from an actual person reading. But I'm sure Keynote and the other speech synthesizers for the BrailleNote and other notetakers do not use this unit-selection method.

I've always been interested in this topic and am really not sure where else to ask. I did learn, however, that the first text-to-speech machine was invented in the late 1700s by a Hungarian man. It used levers, tubes, other moveable parts, and perhaps cylinders to simulate the human vocal tract. I have no interest in creating a synthesizer myself, whether software or hardware-based, unit selection or otherwise; I'm just interested in the basics, with a bit of detail, of how it all works.

Josh
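[For readers following the thread: the unit-selection idea Josh describes can be sketched in a few lines of code. This is a toy illustration only, not how Keynote or any real product works; the unit inventory, unit names, and sample values below are all made up, and real systems store thousands of candidate recordings per unit and choose among them by acoustic cost before joining.]

```python
# Toy sketch of concatenative (unit-selection) synthesis.
# Hypothetical inventory: short snippets of "recorded speech",
# represented as lists of audio samples (floats in [-1.0, 1.0]).
UNITS = {
    "h-e": [0.0, 0.2, 0.4, 0.3],
    "e-l": [0.3, 0.1, -0.1, -0.2],
    "l-o": [-0.2, 0.0, 0.2, 0.0],
}

def synthesize(unit_names, crossfade=2):
    """Concatenate pre-recorded units, blending `crossfade` samples
    at each join so the splice is less audible."""
    out = []
    for name in unit_names:
        unit = UNITS[name]
        if out and crossfade:
            n = min(crossfade, len(out), len(unit))
            # Linear crossfade: ramp the previous unit down
            # while the new unit ramps up over n samples.
            for i in range(n):
                w = (i + 1) / (n + 1)
                out[-n + i] = out[-n + i] * (1 - w) + unit[i] * w
            out.extend(unit[n:])
        else:
            out.extend(unit)
    return out

wave = synthesize(["h-e", "e-l", "l-o"])
```

The output is just one long list of samples sent to the sound hardware, which is why unit-selection voices need so much storage: quality comes from having many alternative recordings of each unit, not from modeling the vocal tract.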
