Re: just an observation about USB

2015-10-19 Thread Gerd Hoffmann
On Fr, 2015-10-16 at 11:48 -0400, Eric S. Johansson wrote:
> 
> On 10/16/2015 07:55 AM, Stefan Hajnoczi wrote:
> > QEMU can emulate PCI soundcards, including the Intel HD Audio codec 
> > cards (-device intel-hda or -soundhw hda might do the trick). Low 
> > latency and power consumption are usually at odds with each other. 
> > That's because real-time audio requires small buffers many times per 
> > second, so lots of interrupts and power consumption. Anyway, PCI 
> > should be an improvement from USB audio. Stefan 
> 
> I set it up with ich9.  I switched the default audio to my headset. I 
> hear the Windows startup sound in the headset. Dragon reports that the 
> mic is not plugged in.  I can see the audio level move in the sound 
> settings, so I know the host is hearing the audio.
> 
> What should I look at next?

Try '-device intel-hda -device hda-micro' (instead of '-device intel-hda
-device hda-duplex', or '-soundhw hda', which is a shortcut for the
latter).

'hda-duplex' presents a codec with line-in and line-out to the guest.
'hda-micro' presents a codec with microphone and speaker to the guest.
Other than having the input and output tagged differently, the codecs are
identical.  But declaring the input to be a mic, in particular, seems to
be needed to make some picky Windows software happy.
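For concreteness, an invocation along these lines might work (a sketch only; the disk image name, memory size, and other flags here are placeholders, not taken from the thread):

```shell
# Sketch: run a Windows guest with the HDA controller plus the hda-micro
# codec, so the guest sees a microphone input rather than a line-in.
# The image file and memory size below are placeholders; adjust for your
# setup. The command is echoed rather than executed so it can be
# reviewed and adapted first.
cmd="qemu-system-x86_64 \
  -m 4G \
  -drive file=windows.img,format=qcow2 \
  -device intel-hda \
  -device hda-micro"
echo "$cmd"
```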

cheers,
  Gerd


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: just an observation about USB

2015-10-16 Thread Stefan Hajnoczi
On Wed, Oct 14, 2015 at 04:30:22PM -0400, Eric S. Johansson wrote:
> On 10/14/2015 04:04 PM, Paolo Bonzini wrote:
> >On 14/10/2015 21:39, Eric S. Johansson wrote:
> >>Latency is a bit longer than I like. USB and network connections break
> >>every time I come out of suspend, but at least I don't have to use
> >>Windows all the time.
> >>
> >>  One thing is puzzling though. Windows, in idle, consumes something like
> >>15 to 20% CPU according to top. I turn on NaturallySpeaking, the
> >>utilization climbs to roughly 30 to 40%. I turn on the microphone
> >>and utilization jumps up to 80-110%.  In other words, it takes up a
> >>whole core.
> >USB is really expensive because it's all done through polling.  Do that
> >in hardware, and your computer is a bit hotter; do that in software
> >(that's what VMs do) and your computer doubles as a frying pan.
> >
> >If you have USB3 drivers in Windows, you can try using a USB3
> >controller.  But it's probably going to waste a lot of processing power
> >too, because USB audio uses a lot of small packets, making it basically
> >the worst case.
> 
>  Okay, then let's try to solve this a different way. What's the cleanest,
> lowest-latency way of delivering audio to a virtual machine that doesn't use
> USB in the virtual machine?

QEMU can emulate PCI soundcards, including the Intel HD Audio codec
cards (-device intel-hda or -soundhw hda might do the trick).

Low latency and power consumption are usually at odds with each other.
That's because real-time audio requires small buffers many times per
second, so lots of interrupts and power consumption.
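As a rough back-of-the-envelope illustration of that tradeoff (the sample rate and buffer sizes here are example figures, not anything QEMU specifically uses):

```shell
# Wakeups per second implied by an audio buffer size: at 48 kHz, a
# 480-sample buffer drains every 10 ms (100 interrupts/sec), while a
# 128-sample buffer, chosen for lower latency, drains every ~2.7 ms
# (375 interrupts/sec). Smaller buffers mean more wakeups.
rate=48000
large_buf=480
small_buf=128
echo "$((rate / large_buf)) wakeups/sec with ${large_buf}-sample buffers"
echo "$((rate / small_buf)) wakeups/sec with ${small_buf}-sample buffers"
```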

Anyway, PCI should be an improvement from USB audio.

Stefan


Re: just an observation about USB

2015-10-16 Thread Eric S. Johansson



On 10/16/2015 07:55 AM, Stefan Hajnoczi wrote:
QEMU can emulate PCI soundcards, including the Intel HD Audio codec 
cards (-device intel-hda or -soundhw hda might do the trick). Low 
latency and power consumption are usually at odds with each other. 
That's because real-time audio requires small buffers many times per 
second, so lots of interrupts and power consumption. Anyway, PCI 
should be an improvement from USB audio. Stefan 


I set it up with ich9.  I switched the default audio to my headset. I 
hear the Windows startup sound in the headset. Dragon reports that the 
mic is not plugged in.  I can see the audio level move in the sound 
settings, so I know the host is hearing the audio.


What should I look at next?

--- eric


just an observation about USB

2015-10-14 Thread Eric S. Johansson

 An update from the NaturallySpeaking-in-a-VM project.

 I don't remember what I told you before, but yes, I can now send 
keystroke events generated by speech recognition in the Windows guest 
into the Linux input queue. I can also extract information from the 
Linux side and have it modify the grammar on the Windows side. The 
result of activating that grammar is that I can execute code on either 
side in response to speech recognition commands. It's fragile as all 
hell, but I'm the only one using it so far. :-)


Latency is a bit longer than I like. USB and network connections break 
every time I come out of suspend, but at least I don't have to use 
Windows all the time.


 One thing is puzzling though. Windows, in idle, consumes something like 
15 to 20% CPU according to top. I turn on NaturallySpeaking, the 
utilization climbs to roughly 30 to 40%. I turn on the microphone 
and utilization jumps up to 80-110%.  In other words, it takes up a 
whole core.


I can live with it. I chalk it up to the cost of having a disability 
(a.k.a. cripple tax).


 I hope my observations are useful; if you want me to monitor 
anything, let me know and I'll try to fit it into my daily routine.



Re: just an observation about USB

2015-10-14 Thread Paolo Bonzini


On 14/10/2015 21:39, Eric S. Johansson wrote:
>  An update from the NaturallySpeaking-in-a-VM project.
> 
>  I don't remember what I told you before, but yes, I can now send keystroke
> events generated by speech recognition in the Windows guest into the
> Linux input queue. I can also extract information from the Linux side
> and have it modify the grammar on the Windows side. The result of
> activating that grammar is that I can execute code on either side in
> response to speech recognition commands. It's fragile as all hell, but
> I'm the only one using it so far. :-)

That's awesome!  What was the problem?

> Latency is a bit longer than I like. USB and network connections break
> every time I come out of suspend, but at least I don't have to use
> Windows all the time.
> 
>  One thing is puzzling though. Windows, in idle, consumes something like
> 15 to 20% CPU according to top. I turn on NaturallySpeaking, the
> utilization climbs to roughly 30 to 40%. I turn on the microphone
> and utilization jumps up to 80-110%.  In other words, it takes up a
> whole core.

USB is really expensive because it's all done through polling.  Do that
in hardware, and your computer is a bit hotter; do that in software
(that's what VMs do) and your computer doubles as a frying pan.

If you have USB3 drivers in Windows, you can try using a USB3
controller.  But it's probably going to waste a lot of processing power
too, because USB audio uses a lot of small packets, making it basically
the worst case.
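To put rough numbers on "a lot of small packets" (assuming a typical full-speed USB audio stream at 48 kHz, 16-bit stereo; actual devices vary):

```shell
# Full-speed USB delivers one isochronous packet per 1 ms frame, so the
# emulated controller must service 1000 transfers per second, while each
# packet carries only ~192 bytes of 48 kHz 16-bit stereo audio.
frames_per_sec=1000                        # full-speed USB: 1 ms frames
bytes_per_sec=$((48000 * 2 * 2))           # 48 kHz * 2 bytes * 2 channels
echo "$((bytes_per_sec / frames_per_sec)) bytes per packet"
echo "$frames_per_sec packets per second"
```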

Paolo


Re: just an observation about USB

2015-10-14 Thread Eric S. Johansson



On 10/14/2015 04:04 PM, Paolo Bonzini wrote:


On 14/10/2015 21:39, Eric S. Johansson wrote:

  An update from the NaturallySpeaking-in-a-VM project.

  I don't remember what I told you before, but yes, I can now send keystroke
events generated by speech recognition in the Windows guest into the
Linux input queue. I can also extract information from the Linux side
and have it modify the grammar on the Windows side. The result of
activating that grammar is that I can execute code on either side in
response to speech recognition commands. It's fragile as all hell, but
I'm the only one using it so far. :-)

That's awesome!  What was the problem?

I would have to say that most of the problems were because I just didn't 
know enough. Once I found the right people and gained a bit more 
knowledge about subsystems I had never touched, it came together pretty 
easily.


 I'm living with this for a while to get a feel for what I need to do 
next. It looks like the two things that would be most important are 
communicating window status (i.e., is it in a text area or not) and 
trying to create something like Select-and-Say without really using it, 
because Nuance isn't talking about how to make it work.


 The first is important so that I can know when to drop keystrokes from 
inappropriate recognition. For example, in Thunderbird, you only want 
generalized dictation in text regions, like when composing this email. You 
don't want it happening someplace where keystroke commands are active, 
such as the navigation windows. Let me tell you, I have lost more email 
to misrecognition errors at the wrong time than to anything else.


The second is important to enable correction and speech-driven editing.





Latency is a bit longer than I like. USB and network connections break
every time I come out of suspend, but at least I don't have to use
Windows all the time.

  One thing is puzzling though. Windows, in idle, consumes something like
15 to 20% CPU according to top. I turn on NaturallySpeaking, the
utilization climbs to roughly 30 to 40%. I turn on the microphone
and utilization jumps up to 80-110%.  In other words, it takes up a
whole core.

USB is really expensive because it's all done through polling.  Do that
in hardware, and your computer is a bit hotter; do that in software
(that's what VMs do) and your computer doubles as a frying pan.

If you have USB3 drivers in Windows, you can try using a USB3
controller.  But it's probably going to waste a lot of processing power
too, because USB audio uses a lot of small packets, making it basically
the worst case.


 Okay, then let's try to solve this a different way. What's the 
cleanest, lowest-latency way of delivering audio to a virtual machine 
that doesn't use USB in the virtual machine?


I will say that my experience here, and this note about USB explaining 
why my laptop gets so hot, reinforces where I want to go with this model 
of accessibility tools. It's nice to be able to make this happen in a VM, 
but I think the better solution is to keep all of the accessibility 
tools, such as speech recognition or text-to-speech, in a tablet-like 
device, so you can dedicate all of the horsepower to them and carry the 
whole accessibility interface on a dedicated platform. Then it should be 
relatively simple[1] to put a small bit of software on the machine 
where you do your work and make that box accessible to a disabled user.


I've simulated this with two laptops and it worked really well, much 
better than with a virtual machine. The challenge is finding a suitable 
secondary device that can run Windows and NaturallySpeaking plus 
whatever, that isn't too large, too expensive, or too slow.


http://nuance.custhelp.com/app/answers/detail/a_id/16262/~/system-requirements-for-dragon-naturallyspeaking-13

From past experience, I can tell you that the specs are good for at 
least two releases as long as you are running nothing else on that machine.


--- eric

[1]  you can stop laughing now. :-)

