All right, you developers,
Here is another tricky question. I hope someone has a solution, as it would 
improve my projects quite a bit.

Let's say, for the sake of example, that a user is operating his computer with 
Eloquence's Reed voice, as many users actually do. 

My app sends a few messages to the synthesizer, such as status messages during 
given operations, or result messages from other operations. Since they are all 
sent to the same synth, they are likely to drown in the rest of the activity 
the user is performing, and he may not be able to tell what a message is 
about. This is especially the case when an app is doing some work in the 
background, or in a separate window, while the user is currently reading a 
document. If a status message from the app pops in right in the middle of the 
reading, the user will either think it is part of the document, or experience 
the speech breaking down altogether. 

It would have many benefits if the app could send its "internal" messages to 
another voice available on the user's system. Now, with the 
    Speak
command in WE, is there a way to redirect the output to a secondary synth or 
voice? Or is there a "back-door" workaround for this issue?
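To make the idea concrete, here is a minimal sketch of the routing I have in 
mind, in Python. Everything in it is hypothetical: the Synth class is just a 
stand-in for whatever speech channel is available, and the fallback prefix is 
one possible way to keep app messages recognizable when no second voice 
exists. On Windows, the secondary channel could plausibly be backed by SAPI 
(the "SAPI.SpVoice" COM object), which lets an app pick an installed voice 
independently of the screen reader, but that is an assumption about the 
deployment, not something WE itself provides.

```python
class Synth:
    """Hypothetical stand-in for one speech synthesizer channel."""

    def __init__(self, voice):
        self.voice = voice
        self.spoken = []          # log utterances so the behavior is observable

    def speak(self, text):
        self.spoken.append(text)  # a real synth would actually speak here


class MessageRouter:
    """Send document text to the primary voice, app status to a secondary one."""

    def __init__(self, primary, secondary=None):
        self.primary = primary
        # Fall back to the primary voice when no second voice is installed.
        self.secondary = secondary if secondary is not None else primary

    def speak_document(self, text):
        self.primary.speak(text)

    def speak_status(self, text):
        # When falling back to the shared voice, prefix the message so the
        # user can still tell it apart from the document being read.
        if self.secondary is self.primary:
            text = "App message: " + text
        self.secondary.speak(text)


main = Synth("Reed")              # the user's everyday voice
alt = Synth("Shelley")            # hypothetical second voice on the system
router = MessageRouter(main, alt)

router.speak_document("Chapter one of the document...")
router.speak_status("Download complete.")
```

The point of the fallback branch is that the app degrades gracefully: with a 
second voice present, status messages are audibly distinct by voice alone; 
without one, they are at least distinct by the spoken prefix.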

I would greatly welcome any ideas you all may have. Maybe some of you have 
already implemented such a solution in your apps and could share some 
techniques? 
Thanks a lot,
