I'm assembling sentences to be spoken from individually recorded words.
Sometimes a word is not played completely. It seems to depend on some
kind of quantization in the sound device (it reports as "MacSoundManager"
on the Mac, but the same thing happens on Windows).
Depending on what comes before it, "What's next" can come out as
"what's neck", "What's next", "whatsxt", or just "next".
It happens with imported AIF files and with imported SWA files made from
the AIFs.
The handler in question (with some record-keeping junk removed):
on saySentence sentence
  sound(1).stop()
  updateStage
  wordTotal = sentence.word.count
  playList = []
  -- queue up the words
  repeat with x = 1 to wordTotal
    theWord = sentence.word[x]
    add(playList, [#member: (member theWord of castLib "sounds")])
  end repeat
  sound(1).setPlayList(playList)
  -- and play the sentence
  sound(1).play()
end
Any tips, pointers, or explanations of what's going on?
Are there particular sound lengths that work best?
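In case it helps frame an answer, here's the direction I'm tempted to try
next: queueing each word individually instead of handing the channel a
whole playlist. This is an untested sketch; it assumes that sound(1).queue()
preloads the start of each queued sound and that #preloadTime is a legal
property in the queue property list, so each word would be buffered before
it plays. I don't know yet whether that actually cures the clipping.

on saySentenceQueued sentence
  sound(1).stop()
  updateStage
  -- queue each word one at a time; queue() should preload the start
  -- of each sound, and #preloadTime asks for extra lead time (in ms)
  repeat with x = 1 to sentence.word.count
    theWord = sentence.word[x]
    sound(1).queue([#member: (member theWord of castLib "sounds"), #preloadTime: 1500])
  end repeat
  -- play the queued sentence
  sound(1).play()
end

If the clipping really is a buffering issue, giving each word its own
preload time might make a difference; if not, at least it narrows things
down.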
--
Carl West [EMAIL PROTECTED]
617.262.8830 x246
"Depend upon it, there comes a time when, for every addition of
knowledge, you forget something that you knew before. It is of the
highest importance, therefore, not to have useless facts elbowing out
the useful ones."
-Sherlock Holmes in 'A Study in Scarlet'