Update from the NaturallySpeaking-in-a-VM project.

I don't remember what I told you before, but yes: I can now send keystroke events generated by speech recognition in the Windows guest into the Linux input queue. I can also extract information from the Linux side and use it to modify the grammar on the Windows side. Activating that grammar means I can execute code on either side in response to speech recognition commands. It's fragile as all hell, but I'm the only one using it so far. :-)
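
For the curious, the Linux-side half is roughly the idea below, sketched in Python using python-evdev's uinput support. The TCP transport, the port number, and the one-key-name-per-line wire format are placeholders for illustration, not my actual plumbing:

import socket
from evdev import UInput, ecodes as e

# Virtual keyboard; needs access to /dev/uinput
# (root, or a udev rule granting your user permission).
ui = UInput()

# Listen for key names sent over from the Windows guest.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 5555))   # port is made up for the example
srv.listen(1)
conn, _ = srv.accept()

for line in conn.makefile():
    code = getattr(e, "KEY_" + line.strip().upper(), None)
    if code is None:
        continue                 # ignore names evdev doesn't know
    ui.write(e.EV_KEY, code, 1)  # key press
    ui.write(e.EV_KEY, code, 0)  # key release
    ui.syn()                     # flush the event batch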

Latency is a bit longer than I'd like, and the USB and network connections break every time I come out of suspend, but at least I don't have to use Windows all the time.

One thing is puzzling, though. Windows, idle, consumes something like 15 to 20% CPU according to top. When I turn on NaturallySpeaking, utilization climbs to roughly 30 to 40%. When I turn on the microphone, it jumps to 80-110%. In other words, it takes up a whole core.
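
Those numbers come from eyeballing top; something like the sketch below could log them more systematically. It uses psutil, and matching the VM process by "qemu" in its name is an assumption about how the guest shows up on my machine:

import time
import psutil

# Find the first process whose name mentions "qemu"
# (raises StopIteration if the VM isn't running).
vm = next(p for p in psutil.process_iter(["name"])
          if "qemu" in (p.info["name"] or ""))

while True:
    # Blocks for 5 seconds, then reports CPU use over that window.
    pct = vm.cpu_percent(interval=5.0)
    print(time.strftime("%H:%M:%S"), "qemu cpu: %.0f%%" % pct)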

I can live with it. I chalk it up to the cost of having a disability (a.k.a. cripple tax).

Hope my observations are useful, and if you want me to monitor anything, let me know and I'll try to fit it into my daily routine.