You could use SAPI like this:

' SapiSpeak

' Give the screen reader a moment to finish any current speech
' before the SAPI voice starts.
WScript.Sleep 2000

strText = "Please wait while we get ready for lunch!"

' Create a SAPI voice independent of the screen reader's synthesizer
' and speak the message through it.
Set objVoice = CreateObject("SAPI.SpVoice")
objVoice.Speak strText
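
If you want the message in a specific installed voice rather than the system 
default, SpVoice also exposes GetVoices.  The voice name below is only an 
example; substitute whatever is installed on the user's machine:

' Pick a specific installed voice by name.  "Microsoft Anna" is just an
' example and may not exist on a given system.
Set objVoice = CreateObject("SAPI.SpVoice")
Set colVoices = objVoice.GetVoices("Name=Microsoft Anna")
If colVoices.Count > 0 Then
    Set objVoice.Voice = colVoices.Item(0)
End If

' SVSFlagsAsync (1) makes Speak return immediately, so the script
' doesn't block while the message plays.
Const SVSFlagsAsync = 1
objVoice.Speak "Status: operation complete.", SVSFlagsAsync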

hth
Jeff Weiss


From: Chip Orange [mailto:[email protected]]
Sent: Friday, January 17, 2014 10:41 AM
To: [email protected]
Subject: RE: Using a secondary voice for speech messages

Hi David,

I absolutely agree with you about the need for this.  I've tried to add it to a 
couple of my apps, but haven't had much luck, because the WE object model just 
doesn't support the idea.

It does support it just a little, in that the second parameter of the Speak 
method lets you specify whether the message should be spoken with the screen, 
mouse, or keyboard voice settings.  This requires the user to go and alter 
those voice settings (which they may not want to do), and the only thing it 
lets you change is the basic pitch or tone of the synthesizer in use; you 
cannot use more than one synthesizer concurrently.
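
For illustration, a call might look something like the sketch below, assuming 
Speak is available as a global in a WE VBScript script.  The selector 
constant's name and value here are my own placeholders, not documented WE 
constants, so check the WE scripting reference for the real values:

' Hedged sketch: route a message through the keyboard voice settings.
' WE_KEYBOARD_VOICE and its value (2) are assumed placeholders.
Const WE_KEYBOARD_VOICE = 2
Speak "Background task finished.", WE_KEYBOARD_VOICE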

So, I tried to do this in my apps without requiring the user to alter the 
keyboard, mouse, or screen voices, and even without changing synthesizers I 
ran into the problem Jim brings up: some delay is introduced even when staying 
with the same synthesizer.  If you try to change synthesizers, the delay is 
intolerable.

If you stay with the same synth, and you want this to happen automatically 
without requiring the user to go alter their three basic voice settings, it's 
very difficult to determine which changes to speech parameters will actually 
produce a noticeable change in the sound of the voice.  You can look at my app 
named Word Advanced Features for some code where I try to deal with this 
automatically.  But I've come to believe it may be better just to ask the user 
to dedicate one of the mouse or keyboard voices to this function, and then ask 
them to make whatever changes they wish, so that they will be able to notice 
the difference in voice sound.
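
For comparison, on the SAPI side a pitch change can be forced through embedded 
TTS XML markup; whether a shift like this is actually noticeable depends 
entirely on the synthesizer in use:

' Hedged sketch using SAPI rather than the WE object model: speak with the
' pitch raised via SAPI TTS XML.  SVSFIsXML (8) tells Speak to parse the
' markup; absmiddle takes values from -10 to 10.
Const SVSFIsXML = 8
Set objSapi = CreateObject("SAPI.SpVoice")
objSapi.Speak "<pitch absmiddle='8'>Status message in a higher pitch.</pitch>", SVSFIsXML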

Hth,

Chip



From: David [mailto:[email protected]]
Sent: Friday, January 17, 2014 7:56 AM
To: [email protected]<mailto:[email protected]>
Subject: Using a secondary voice for speech messages

All right, you developers,
Here is another tricky question.  I hope someone has a solution, as it would 
improve my projects a bit.

Let's say, for the sake of example, that a user is operating his computer with 
Eloquence Reed, as many users actually do.

My app sends a few messages to the synthesizer, such as status messages during 
given operations, or result messages from others.  Since they are all sent to 
the same synth, they are likely to drown in the other activity the user is 
performing, and he may not be able to tell what a message is about.  This is 
especially the case when an app is performing some activity in the background, 
or in a separate window, while the user is reading a document.  If the status 
message from the app pops in right in the middle of the document reading, the 
user will either think it is part of the document, or experience a breakdown 
of the speech altogether.

It would have many benefits if the app could send its "internal" messages to 
another voice available on the user's system.  Now, with the Speak command in 
WE, is there a way to redirect the output to a secondary synth or voice?  Or 
is there a "back-door" workaround to this issue?

I would greatly welcome any ideas you all may have.  Maybe you have already 
implemented such a solution in your apps, and could share some techniques?
Thanks a lot,

