Hi All,
there is a way to do what you want in the background, and it involves using
the three voices you have on file. Below are the settings I use in the Cuckoo
Clock program; they can be used only with WE voices.
Since each voice has its own settings, you can change one of them in the
background for a momentary event, then change the voice back after you have
used it.
So if you are using the mouse all the time, use the keyboard voice instead, or
whichever voice is not in use at that given moment.
When using the Speak method, you just add the extra parameter at the end of
the command, and you get the different voice that was set up in the background
for the need at hand.
In other words, you change the active settings for one of the three voices
below, point speakerVoice at the one you have set, and the voice has changed.
I am sure there are other settings inside the active voices to silence speech,
stop it momentarily, and let it continue, but I have not looked closely enough
to be sure.
So, David, try these settings of the Speak command and see what you come up
with, using the change dictionary and such that I showed earlier.
I have not done anything further at the moment, since I was up all last night
over a simple SAPI problem with that class. As Chip pointed out, you have to
check which SAPI engine is in use at the time, which just involves using a
different CreateObject setting, but it can be a pain, like all Microsoft
products...
This is what I did:
' The speakerVoice is used for all announcements; here it is set to the mouse voice.
Dim speakerVoice
Dim keyboardVoice: keyboardVoice = 0
Dim screenVoice: screenVoice = 1
Dim mouseVoice: mouseVoice = 2
speakerVoice = mouseVoice
Speak " It is time to wake up ", speakerVoice
Above is what I did at that moment to change a voice.
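A minimal sketch of the "switch and restore" idea described above: borrow an idle voice for one announcement, then put the setting back. This assumes the same Speak signature and voice indices as Bruce's snippet; the announcement text is illustrative.

```vbscript
' Sketch: speak one announcement in the keyboard voice, then restore.
' Assumes Speak accepts a voice index as its second argument, as above.
Dim savedVoice
savedVoice = speakerVoice        ' remember the current announcement voice
speakerVoice = keyboardVoice     ' borrow a voice that is idle right now
Speak "Backup finished.", speakerVoice
speakerVoice = savedVoice        ' put things back after the momentary event
```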
Bruce
Sent: Sunday, January 19, 2014 11:54 AM
Subject: RE: Using a secondary voice for speech messages
One thing to research is that MS has now updated SAPI with something
incompatible called the Microsoft Speech Platform, so now you've got four
types of speech engine to work with (WE, SAPI 4, SAPI 5, and the MS Speech
Platform);
none of which will pause properly to work with the others. I really think it's
better to play with the current WE voice parameters, and ask GW if they'll
consider expanding the object model to let us do more with voices in the future.
Chip
From: LB [mailto:[email protected]]
Sent: Saturday, January 18, 2014 5:27 PM
To: [email protected]
Subject: Re: Using a secondary voice for speech messages
Hi David and Jeff,
I sent both of you a copy of the SAPI class I wrote, which also has
text-to-speech in it to use if you need it. I also added the pitch setting,
which SAPI does not have unless you use XML.
The other posting has the change-voice routine, which takes advantage of
all the settings. So enjoy and have fun using it.
Note: it can also detect SAPI 4, ignore it, and keep that flag set to 5, just
in case you are wondering why I have that in there.
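For reference, SAPI 5 itself exposes pitch only through XML markup in the spoken string, which is what the note above alludes to. A hedged sketch of the XML route, using the documented SVSFIsXML speak flag (value 8); the pitch value and text are illustrative:

```vbscript
' Sketch: raising pitch through SAPI 5 XML markup instead of a property.
Const SVSFIsXML = 8  ' tell SAPI the string contains XML markup
Dim objVoice
Set objVoice = CreateObject("SAPI.SpVoice")
objVoice.Speak "<pitch middle='+5'>This is spoken at a higher pitch.</pitch>", SVSFIsXML
```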
Bruce
Sent: Saturday, January 18, 2014 3:47 PM
Subject: Re: Using a secondary voice for speech messages
I had this as part of something else.
A Sleep would probably be needed after the SAPI speech to keep the
Window-Eyes speech from talking over it.
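One way to avoid guessing at Sleep durations on the SAPI side: SpVoice.Speak blocks by default, and when called asynchronously it offers WaitUntilDone. A hedged sketch (the flag value 1 is the documented SVSFlagsAsync constant; the text is illustrative):

```vbscript
' Sketch: letting SAPI block (or wait) instead of sleeping a fixed time.
Const SVSFlagsAsync = 1
Dim objVoice
Set objVoice = CreateObject("SAPI.SpVoice")
objVoice.Speak "Please wait.", SVSFlagsAsync  ' returns immediately
objVoice.WaitUntilDone -1                     ' block until speech finishes
```

This only synchronizes with SAPI's own output; it does not stop Window-Eyes from speaking over it, which is the problem discussed above.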
Jeff Weiss
From: David
Sent: Saturday, January 18, 2014 2:42 PM
To: [email protected]
Subject: Re: Using a secondary voice for speech messages
Jeff,
Thanks. I was kind of considering the SAPI approach, but thought I would
ask the community for any suggested methods, so that I could take them all
into consideration before settling on a solution.
The only thing in your sample that I am questioning is why you put a
two-second pause (Sleep) ahead of your SAPI call. That would definitely slow
down the software, wouldn't it? Or do you have a particular reason for
putting your code to sleep even before it has started to do anything?
Just wanted to make sure I understand what you are trying here. Thanks again,
----- Original Message -----
From: Jeff Weiss
To: [email protected]
Sent: Friday, January 17, 2014 6:00 PM
Subject: RE: Using a secondary voice for speech messages
You could use SAPI like this:
' SapiSpeak: give Window-Eyes a moment to finish, then speak via SAPI.
WScript.Sleep 2000
Dim strText, objVoice
strText = "Please wait while we get ready for lunch! "
Set objVoice = CreateObject("SAPI.SpVoice")
objVoice.Speak strText
hth
Jeff Weiss
From: Chip Orange [mailto:[email protected]]
Sent: Friday, January 17, 2014 10:41 AM
To: [email protected]
Subject: RE: Using a secondary voice for speech messages
Hi David,
I absolutely agree with you about the need for this. I've tried to add
it to a couple of my apps, but not had a lot of luck, because the WE object
model just doesn't support this idea.
It does support it just a little, in that the second parameter of the
speak method allows you to specify whether it should be spoken in the screen,
mouse, or keyboard voice settings. This requires the user to go and alter
these voice settings (which they may not want to do), and the only thing it
allows you to change is the basic pitch or tone of the synthesizer in use, but
you cannot use more than one synthesizer concurrently.
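A minimal sketch of that second parameter. The index values here are assumptions, taken from Bruce's snippet at the top of this thread (0 = keyboard, 1 = screen, 2 = mouse), not from GW documentation:

```vbscript
' Sketch: selecting which WE voice settings speak a message.
' Assumed indices per Bruce's note: 0 = keyboard, 1 = screen, 2 = mouse.
Speak "Spell check complete.", 2   ' spoken with the mouse voice settings
```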
So, I tried to do this in my apps without requiring the user to alter the
keyboard, mouse, or screen voices, and even without changing synthesizers I
ran into the problem Jim brings up: there is some delay introduced even when
staying with the same synthesizer. If you try to change synthesizers, the
delay is intolerable.
If you stay with the same synth, and you want this just to happen
automatically without requiring the user to go alter their basic 3 voice
settings, it's very difficult to determine which changes to speech parameters
will actually produce a noticeable change in the sound of the voice. You can
look at my app named Word Advanced Features for some code where I try to deal
with this automatically, but I've come to believe it may be better just to ask
the user to dedicate one of the mouse or keyboard voices to this function, and
then ask them to make the changes they wish, so that they will be able to
notice the difference in voice sound.
Hth,
Chip
From: David [mailto:[email protected]]
Sent: Friday, January 17, 2014 7:56 AM
To: [email protected]
Subject: Using a secondary voice for speech messages
All right, you developers,
Here is another tricky question. Hope someone has a solution, as it would
have improved my projects a bit.
Let's say, for the sake of example, that a user is operating his computer
with the Eloquence voice, as many users actually do.
My app sends a few messages to the synthesizer, like status messages during
given operations, or result messages from other operations. Since they are
all sent to the same synth, it is likely the message will simply drown in the
rest of the activity the user is performing, and he may not be able to
discern what the message is about. This could especially be the case when an
app is performing some activity in the background, or in a separate window,
while the user is reading a document. If the status message from the app pops
in right in the middle of the document reading, either the user will think it
is part of the document, or he may experience a breakdown of the speech
altogether.
It would have many benefits if the app could have its "internal" messages
sent to another voice available on the user's system. Now, with the Speak
command in WE, is there a way to redirect the output to a secondary synth or
voice? Or is there a "back-door" workaround for this issue?
I would greatly welcome any ideas you all may have. Maybe you have already
implemented such a solution in your apps and could share some techniques?
Thanks a lot,