Hi David and Jeff,

    I sent both of you a copy of the SAPI class I wrote, which also includes 
the text-to-speech code in case you need it. I also added a pitch setting, 
which SAPI does not offer directly unless you use XML markup.
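
    For anyone who wants to see the XML route directly, here is a minimal 
sketch in plain VBScript (assuming a WSH host; the value 8 is SAPI's 
SVSFIsXML flag, and the pitch range runs from -10 to 10):

        ' Raise the pitch through SAPI XML markup, since SpVoice has
        ' Rate and Volume properties but no Pitch property of its own.
        Dim objVoice
        Set objVoice = CreateObject("SAPI.SpVoice")
        ' SVSFIsXML = 8 tells Speak to interpret the string as XML.
        objVoice.Speak "<pitch middle=""5"">Spoken at a higher pitch.</pitch>", 8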

    The other posting has the change-voice routine, which takes advantage of 
all the settings. So enjoy and have fun using it.
Note: it can also see SAPI 4, ignore it, and keep that flag set to 5, just in 
case you are wondering why I have that in there.

        Bruce

  Sent: Saturday, January 18, 2014 3:47 PM
  Subject: Re: Using a secondary voice for speech messages


  I had this as part of something else.
  Sleep would probably be needed after the SAPI speech, to keep the 
Window-Eyes speech from talking over it.
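  For example, a sketch assuming a WSH host (where WScript.Sleep is 
available) and a delay figure you would tune by ear:

    Set objVoice = CreateObject("SAPI.SpVoice")
    ' Speak is synchronous by default, so the call returns only after
    ' the SAPI voice has finished talking.
    objVoice.Speak "Status: file copy complete."
    ' Brief pause before Window-Eyes resumes, so the voices don't overlap.
    WScript.Sleep 500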
  Jeff Weiss



  From: David
  Sent: Saturday, January 18, 2014 2:42 PM
  To: [email protected]
  Subject: Re: Using a secondary voice for speech messages

  Jeff,
  Thanks. I was considering the SAPI approach, but thought I would ask the 
community for suggested methods first, so I could take them all into 
consideration before settling on a solution.

  The only thing in your sample that I am questioning is why you put a 
two-second pause (sleep) ahead of your SAPI call. That would definitely slow 
down the software, wouldn't it? Or do you have a particular reason for putting 
your code to sleep before it has even started to do anything? I just wanted to 
make sure I understand what you are trying here. Thanks again,

    ----- Original Message -----
    From: Jeff Weiss
    To: [email protected]
    Sent: Friday, January 17, 2014 6:00 PM
    Subject: RE: Using a secondary voice for speech messages


    You could use SAPI like this:



    ' SapiSpeak

    WScript.Sleep 2000    ' two-second pause before speaking

    strText = "Please wait while we get ready for lunch! "

    Set objVoice = CreateObject("SAPI.SpVoice")
    objVoice.Speak strText



    hth

    Jeff Weiss





    From: Chip Orange [mailto:[email protected]]
    Sent: Friday, January 17, 2014 10:41 AM
    To: [email protected]
    Subject: RE: Using a secondary voice for speech messages



    Hi David,



    I absolutely agree with you about the need for this.  I've tried to add it 
to a couple of my apps, but haven't had much luck, because the WE object model 
just doesn't support the idea.



    It does support it just a little, in that the second parameter of the 
Speak method lets you specify whether the text should be spoken with the 
screen, mouse, or keyboard voice settings.  This requires the user to go and 
alter those voice settings (which they may not want to do), and the only thing 
it lets you change is the basic pitch or tone of the synthesizer in use; you 
cannot use more than one synthesizer concurrently.
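
    If it works as described, the call would look something like the sketch 
below. The constant value here is purely hypothetical; check the WE scripting 
documentation for the real parameter values.

        ' Hypothetical sketch: route a message through one of the three
        ' WE voices. The value 2 is a placeholder, not a documented constant.
        Const WE_KEYBOARD_VOICE = 2   ' assumed value; consult the WE docs
        Speak "Background task finished.", WE_KEYBOARD_VOICE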



    So, I tried to do this in my apps without requiring the user to alter the 
keyboard, mouse, or screen voices, and even without changing synths I ran into 
the problem Jim brings up: there's some delay introduced even when staying 
with the same synthesizer.  If you try to change synthesizers, the delay is 
intolerable.



    If you stay with the same synth, and you want this to happen automatically 
without requiring the user to alter their basic 3 voice settings, it's very 
difficult to determine which changes to the speech parameters will actually 
produce a noticeable change in the sound of the voice.  You can look at my app 
named Word Advanced Features for some code where I try to deal with this 
automatically, but I've come to believe it may be better just to ask the user 
to dedicate one of the mouse or keyboard voices to this function, and then ask 
them to make whatever changes they wish, so that they will be able to notice 
the difference in voice sound.
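
    For comparison, on the SAPI side the parameters that reliably produce an 
audible difference are the voice token itself plus Rate and Volume. A minimal 
sketch, assuming at least two SAPI 5 voices are installed:

        Dim objVoice, colVoices
        Set objVoice = CreateObject("SAPI.SpVoice")
        Set colVoices = objVoice.GetVoices()
        ' Pick the second installed voice, if there is one, so the message
        ' sounds clearly different from the user's main synthesizer.
        If colVoices.Count > 1 Then
            Set objVoice.Voice = colVoices.Item(1)
        End If
        objVoice.Rate = 2      ' -10 (slowest) to 10 (fastest)
        objVoice.Volume = 90   ' 0 to 100
        objVoice.Speak "This message uses the secondary voice."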



    Hth,



    Chip







    From: David [mailto:[email protected]]
    Sent: Friday, January 17, 2014 7:56 AM
    To: [email protected]
    Subject: Using a secondary voice for speech messages



    Alright, you developers,

    Here is another tricky question. I hope someone has a solution, as it 
would improve my projects quite a bit.



    Let's say, for the sake of example, that a user is operating his computer 
with Eloquence Reed, as many users actually are.



    My app sends a few messages to the synthesizer, like status messages 
during given operations, or result messages from other operations. Since they 
are all sent to the same synth, a message is likely to simply drown in all the 
other activity the user is performing, and he may not be able to discern what 
the message is about. This could especially be the case when an app is 
performing some activity in the background, or in a separate window, while the 
user is currently reading a document. Now, if the status message from the app 
pops in right in the middle of the document reading, either the user will 
think it is part of the document, or he may experience a breakdown of the 
speech altogether.



    It would have many benefits if the app could have its "internal" messages 
sent to another voice available on the user's system. Now, with the

        Speak

    command in WE, is there a way to redirect the output to a secondary synth 
or voice? Or is there a "back-door" workaround to this issue?



    I would greatly welcome any ideas you all may have. Maybe you have already 
implemented such a solution in your apps and could share some techniques? 
Thanks a lot,





