Re: just an observation about USB

2015-10-16 Thread Eric S. Johansson



On 10/16/2015 07:55 AM, Stefan Hajnoczi wrote:
QEMU can emulate PCI soundcards, including the Intel HD Audio codec 
cards (-device intel-hda or -soundhw hda might do the trick). Low 
latency and power consumption are usually at odds with each other. 
That's because real-time audio requires small buffers many times per 
second, so lots of interrupts and power consumption. Anyway, PCI 
should be an improvement from USB audio. Stefan 


I set it up with ich9. I switched the default audio to my headset, and I hear the Windows startup sound in the headset, but Dragon reports that the mic is not plugged in. I can see the audio level move in the host's sound settings, so I know the host is hearing the audio.
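For reference, an emulated-HDA setup of this kind corresponds to roughly the following QEMU options (a sketch; the PulseAudio backend via QEMU_AUDIO_DRV is an assumption about the host audio stack):

    # emulated ICH9 HD Audio controller with a duplex (playback + capture) codec
    QEMU_AUDIO_DRV=pa qemu-system-x86_64 -enable-kvm -m 2048 \
        -device ich9-intel-hda \
        -device hda-duplex
    # plus the usual disk/network options for the Windows guest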


what should I look at next?

--- eric


just an observation about USB

2015-10-14 Thread Eric S. Johansson

An update from the NaturallySpeaking-in-a-VM project.

I don't remember what I told you before, but yes, I can now send keystroke events generated by speech recognition in the Windows guest into the Linux input queue. I can also extract information from the Linux side and have it modify the grammar on the Windows side. The result of activating that grammar is that I can execute code on either side in response to speech recognition commands. It's fragile as all hell, but I'm the only one using it so far. :-)
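Roughly, the Linux side of that plumbing is a small receiver like the following (a sketch, not the actual scripts; it assumes the guest reaches the host over ssh and that xdotool injection into the X session is acceptable):

    #!/bin/sh
    # runs on the Linux host; reads lines of recognized text on stdin
    # and types each one into whatever window currently has focus
    while IFS= read -r utterance; do
        xdotool type --clearmodifiers "$utterance"
    done

The Windows side then just pipes recognition results into something like "ssh eric@linux-host ./inject-keys.sh" (the hostname and script name are made up).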


Latency is a bit longer than I'd like, and the USB and network connections break every time I come out of suspend, but at least I don't have to use Windows all the time.


One thing is puzzling, though. Windows, at idle, consumes something like 15 to 20% CPU according to top. When I turn on NaturallySpeaking, the utilization climbs to roughly 30 to 40%. When I turn on the microphone, utilization jumps up to 80-110%. In other words, it takes up a whole core.


I can live with it. I chalk it up to the cost of having a disability 
(a.k.a. cripple tax).


Hope my observations are useful. If you want me to monitor anything, let me know and I'll try to fit it into my daily routine.



Re: just an observation about USB

2015-10-14 Thread Eric S. Johansson



On 10/14/2015 04:04 PM, Paolo Bonzini wrote:


On 14/10/2015 21:39, Eric S. Johansson wrote:

An update from the NaturallySpeaking-in-a-VM project.

I don't remember what I told you before, but yes, I can now send keystroke
events generated by speech recognition in the Windows guest into the
Linux input queue. I can also extract information from the Linux side
and have it modify the grammar on the Windows side. The result of
activating that grammar is that I can execute code on either side in
response to speech recognition commands. It's fragile as all hell, but
I'm the only one using it so far. :-)

That's awesome!  What was the problem?

I would have to say that most of the problems were because I just didn't know enough. Once I found the right people and gained a bit more knowledge about subsystems I had never touched, it came together pretty easily.


I'm living with this for a while to get a feel for what I need to do next. It looks like the two things that would be most important are communicating window status (i.e., is the focus in a text area or not) and trying to create something like Select-and-Say without actually using it, because Nuance isn't talking about how to make it work.


The first is important so that I know when to dump keystrokes from inappropriate recognition. For example, in Thunderbird you only want generalized dictation in text regions, like when creating this email. You don't want it happening when you're someplace where keystroke commands are active, such as the navigation windows. Let me tell you, I have lost more email to misrecognition errors at the wrong time than to anything else.


The second is important to enable correction and speech-driven editing.





Latency is a bit longer than I'd like, and the USB and network connections
break every time I come out of suspend, but at least I don't have to use
Windows all the time.

One thing is puzzling, though. Windows, at idle, consumes something like
15 to 20% CPU according to top. When I turn on NaturallySpeaking, the
utilization climbs to roughly 30 to 40%. When I turn on the microphone,
utilization jumps up to 80-110%.  In other words, it takes up a whole core.

USB is really expensive because it's all done through polling.  Do that
in hardware, and your computer is a bit hotter; do that in software
(that's what VMs do) and your computer doubles as a frying pan.

If you have USB3 drivers in Windows, you can try using a USB3
controller.  But it's probably going to waste a lot of processing power
too, because USB audio uses a lot of small packets, making it basically
the worst case.
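Concretely, that suggestion amounts to something like the following on the QEMU command line, assuming the Logitech H800 (046d:0a29) from the other threads is still the headset (nec-usb-xhci is the xHCI controller model available in this QEMU generation):

    # pass the headset through on an emulated USB3 (xHCI) controller
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -device nec-usb-xhci,id=xhci \
        -device usb-host,bus=xhci.0,vendorid=0x046d,productid=0x0a29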


Okay, then let's try to solve this a different way. What's the cleanest, lowest-latency way of delivering audio to a virtual machine that doesn't use USB inside the virtual machine?


I will say that my experience here, and this note about USB explaining why my laptop gets so hot, reinforces where I want to go with this model of accessibility tools. It's nice to be able to make this happen in a VM, but I think the better solution is to keep all of the accessibility tools, such as speech recognition or text-to-speech, in a tablet-like device, so you can dedicate all of the horsepower to them and carry the whole accessibility interface on a dedicated platform. Then it should be relatively simple[1] to put a small bit of software on the machine where you do your work and make that box accessible to a disabled user.


I've simulated this with two laptops and it worked really well, much better than with a virtual machine. The challenge is finding a suitable secondary device that can run Windows and NaturallySpeaking plus whatever else, and that isn't too large, too expensive, or too slow.


http://nuance.custhelp.com/app/answers/detail/a_id/16262/~/system-requirements-for-dragon-naturallyspeaking-13

From past experience, I can tell you that the specs are good for at least two releases, as long as you run nothing else on that machine.


--- eric

[1]  you can stop laughing now. :-)




revisiting speech recognition in a Windows guest on KVM

2015-02-02 Thread Eric S. Johansson
Apologies for taking so long to revisit this topic, but, well: new hardware, new software, a new microphone and, you know, life…


Summary:
I have been unable to forward my USB microphone connection to a Windows guest, and I could use a little bit of assistance pointing me in the right direction.


Long story:
On my new laptop, speech recognition is working well in the Windows guest. I don't remember what client I was working with before, but I'm now using vmm-view in Spice mode. I wasn't able to get the Spice connection to make the USB microphone interface visible in Windows, but I was able to get the built-in host-to-guest USB binding supplied by the VM manager to work.


It's almost exactly right. What I'm noticing is that the audio stream is not as clean as it could be. About every 15 to 20 seconds I hear a very quiet but distinct thump. This is a bad thing because the recognition engine thinks it's the start of an utterance and spins its wheels for 10+ seconds trying to decode what it hears. When NaturallySpeaking is spinning its wheels, it kills off all input, because it is trying to avoid a race condition between other inputs and itself.


I'm not sure where the thump is coming from. I don't hear it on my other 
laptop and it's not external noise because it's extremely regular.


I'm wondering if the same problem would occur over the spice USB 
forwarding. I could use a bit of assistance figuring out what I'm doing 
wrong with spice USB forwarding so I can test out importing audio that way.
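For reference, Spice USB redirection needs the usbredir plumbing on the QEMU side, roughly as sketched below (virt-manager's "USB Redirection" hardware entry generates the libvirt equivalent; the ids are arbitrary, and one chardev/usb-redir pair is needed per redirected device):

    # a USB2 controller plus one Spice usbredir channel
    -device ich9-usb-ehci1,id=usb \
    -chardev spicevmc,name=usbredir,id=usbredirchardev1 \
    -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1

The device is then attached from the Spice client (remote-viewer/virt-viewer) through its USB device selection dialog.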


Thanks
--- eric





Re: usb audio device troubles

2014-12-03 Thread Eric S. Johansson


On 12/3/2014 3:21 AM, Hans de Goede wrote:

Hi all,

On 12/02/2014 01:43 PM, Paolo Bonzini wrote:



On 02/12/2014 13:16, Eric S. Johansson wrote:
I got Win7 installed and the virtio devices working; it took forever to
trickle in updates because of a W7 update manager bug that takes up all CPU
resources. Now I have DNS 13 installed, but I'm getting no audio.

I pass through the USB audio device (Logitech H800, USB 046d:0a29) and
it is seen as a device in Windows. Then I hear the headset sync-up
beeps and the device vanishes from Windows. Pointers as to what I
should look at next?


Adding back Hans and Gerd...


Eric are you using usb-host redirection, or Spice's usb network redir ?


Host redirection I assume. It was from the collection of devices UI and 
I added the device to pass through from the list of host USB devices.





Re: usb audio device troubles

2014-12-03 Thread Eric S. Johansson


On 12/3/2014 3:52 AM, Hans de Goede wrote:

Eric are you using usb-host redirection, or Spice's usb network redir ?


Host redirection I assume. It was from the collection of devices UI 
and I added the device to pass through from the list of host USB 
devices.


Ok, then Gerd is probably the best person to help you further


Let's try this from a different perspective. Which connection method will give me the most reliable/stable USB connection with the cleanest audio? Whatever works best is what I want to use.


--- eric



Re: usb audio device troubles

2014-12-03 Thread Eric S. Johansson


On 12/3/2014 3:52 AM, Hans de Goede wrote:

Eric are you using usb-host redirection, or Spice's usb network redir ?



I spent a little bit of time this morning learning about Spice and the network redirection. It worked for about half an hour and then failed in the same way the host redirection failed: the audio device would appear for a while, I would try to use it, and then it would disappear.


The Spice model has some very nice features, in that I could, in theory, have a working speech recognition engine somewhere in my (air quotes) cloud (air quotes) and then be able to use it via Spice from any desktop I happen to be located in front of. It would also work nicely with my original idea of putting a working KVM virtual machine on an e-SATA SSD external drive and being able to bring my working speech recognition environment with me without having to cart a laptop around.


I hope you can see that this could be generalized into a nicely portable accessibility solution, where the accessibility environment moves with the disabled user and removes the need to put user-specific accessibility software and configuration on every machine. Yes, it does impose a requirement that KVM runs everywhere, but we know that's the future anyway, so why fight it? :-)


Anyway, I think if we can solve this USB audio device problem then I'll 
be very happy and can make further progress towards my goal.


Thank you so very much for the help so far and I hope we can fix this 
USB problem.




usb audio device troubles

2014-12-02 Thread Eric S. Johansson
I got Win7 installed and the virtio devices working; it took forever to trickle in updates because of a W7 update manager bug that takes up all CPU resources. Now I have DNS 13 installed, but I'm getting no audio.

I pass through the USB audio device (Logitech H800, USB 046d:0a29) and it is seen as a device in Windows. Then I hear the headset sync-up beeps and the device vanishes from Windows. Pointers as to what I should look at next?
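For the record, this kind of host passthrough corresponds to roughly the following at the QEMU level (a sketch; virt-manager generates the equivalent libvirt XML, and the vendor:product ID is the one shown by lsusb):

    # host USB passthrough of the Logitech H800 by vendor:product ID
    -usb \
    -device usb-host,vendorid=0x046d,productid=0x0a29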


--- eric



Re: can I make this work… (Foundation for accessibility project)

2014-11-21 Thread Eric S. Johansson


On 11/21/2014 09:06 AM, Paolo Bonzini wrote:


On 20/11/2014 23:22, Eric S. Johansson wrote:

I'll be able to run some tests in about 2 to 3 hours after I finish this
document. Let me know what I should look at?  on a side note, a pointer
to an automated install process would be wonderful.

GNOME Boxes can pretty much automate the install process.

Can you just run ps aux while the install is running and send the result?
I went back and verified I had installed all packages; apparently I had missed a few updates. I also became more familiar with the UI tool and noticed a few places where KVM was now an option. Lastly, I made a copy of the DVD to an ISO as an install image. The end result is, *wow*, much faster. I now have hope that my project will work. It sure does like taking 110% of a CPU, though.


 4384 libvirt+  20   0 2825112 2.058g   9960 R 109.1 26.6  12:47.73 qemu-system-x86


next report after updates install

btw, would you like a better UI design for a management tool?  I have 
some ideas but would need someone with hands to put it together.


--- eric

top sez

Tasks: 182 total,   4 running, 178 sleeping,   0 stopped,   0 zombie
%Cpu(s): 44.2 us, 14.9 sy,  0.0 ni, 38.7 id,  2.0 wa,  0.0 hi,  0.2 si,  0.0 st

KiB Mem:   8128204 total,  4750320 used,  3377884 free,    54476 buffers
KiB Swap:  8338428 total,        0 used,  8338428 free.  1996164 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 4384 libvirt+  20   0 2634992 2.033g   9940 R 108.6 26.2   2:02.83 qemu-syste+
 2668 eric      20   0 1284184  66308  29828 S   2.3  0.8   0:21.50 compiz
 1314 root      20   0 1032288  22264  11436 S   2.0  0.3   0:46.29 libvirtd
   18 root      20   0       0      0      0 S   1.7  0.0   0:00.96 kworker/1:0
 1423 root      20   0  410736  49196  35228 S   1.7  0.6   0:32.18 Xorg
 4694 root      20   0       0      0      0 R   1.7  0.0   0:00.20 kworker/0:1
 2837 eric      20   0 1481612 102828  38476 S   1.0  1.3   0:54.03 python
 2628 eric      20   0   20232940768 S   0.3  0.0   0:00.69 syndaemon
 3047 eric      20   0  653160  20868  12472 S   0.3  0.3   0:02.14 gnome-term+
 3147 eric      20   0  377868   4168   3288 S   0.3  0.1   0:00.04 deja-dup-m+
    1 root      20   0   33908   3280   1472 S   0.0  0.0   0:01.62 init
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kthreadd
    3 root      20   0       0      0      0 S   0.0  0.0   0:00.16 ksoftirqd/0
    4 root      20   0       0      0      0 S   0.0  0.0   0:00.72 kworker/0:0
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:+
    7 root      20   0       0      0      0 S   0.0  0.0   0:00.50 rcu_sched
    8 root      20   0       0      0      0 R   0.0  0.0   0:00.40 rcuos/0

eric@garnet:~$
ps aux sez

eric@garnet:~$ ps -aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.1  0.0  33908  3280 ?        Ss   11:12   0:01 /sbin/init
root         2  0.0  0.0      0     0 ?        S    11:12   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    11:12   0:00 [ksoftirqd/0]
root         4  0.0  0.0      0     0 ?        S    11:12   0:00 [kworker/0:0]
root         5  0.0  0.0      0     0 ?        S    11:12   0:00 [kworker/0:0H]
root         7  0.0  0.0      0     0 ?        S    11:12   0:00 [rcu_sched]
root         8  0.0  0.0      0     0 ?        S    11:12   0:00 [rcuos/0]
root         9  0.0  0.0      0     0 ?        S    11:12   0:00 [rcuos/1]
root        10  0.0  0.0      0     0 ?        S    11:12   0:00 [rcu_bh]
root        11  0.0  0.0      0     0 ?        S    11:12   0:00 [rcuob/0]
root        12  0.0  0.0      0     0 ?        S    11:12   0:00 [rcuob/1]
root        13  0.0  0.0      0     0 ?        S    11:12   0:00 [migration/0]
root        14  0.0  0.0      0     0 ?        S    11:12   0:00 [watchdog/0]
root        15  0.0  0.0      0     0 ?        S    11:12   0:00 [watchdog/1]
root        16  0.0  0.0      0     0 ?        S    11:12   0:00 [migration/1]
root        17  0.0  0.0      0     0 ?        S    11:12   0:00 [ksoftirqd/1]
root        18  0.0  0.0      0     0 ?        S    11:12   0:01 [kworker/1:0]
root        19  0.0  0.0      0     0 ?        S    11:12   0:00 [kworker/1:0H]
root        20  0.0  0.0      0     0 ?        S    11:12   0:00 [khelper]
root        21  0.0  0.0      0     0 ?        S    11:12   0:00 [kdevtmpfs]
root        22  0.0  0.0      0     0 ?        S    11:12   0:00 [netns]
root        23  0.0  0.0      0     0 ?        S    11:12   0:00 [writeback]
root        24  0.0  0.0      0     0 ?        S    11:12   0:00 [kintegrityd]
root        25  0.0  0.0      0     0 ?        S    11:12   0:00 [bioset]
root        26  0.0  0.0      0     0 ?        S    11:12   0:00 [kworker/u5:0]
root        27  0.0  0.0      0     0 ?        S    11:12   0:00 [kblockd]
root        28  0.0  0.0      0     0 ?        S    11:12   0:00

next puzzle: Re: can I make this work… (Foundation for accessibility project)

2014-11-21 Thread Eric S. Johansson


On 11/21/2014 11:52 AM, Eric S. Johansson wrote:



 4384 libvirt+  20   0 2825112 2.058g   9960 R 109.1 26.6  12:47.73 qemu-system-x86


next report after updates install


Next puzzle: updates are not working.
using bridged networking to eth0
using the virtio driver (checked the install on Windows)
browser works in the VM (quite well, in fact)
watching the output of tcpdump (roughly as sketched below)

and there is no apparent traffic for the updates.

Any ideas?
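A sketch of the kind of tcpdump invocation meant here (br0 and the guest address are placeholders for the local bridge setup):

    # on the host: look for the guest's update traffic on the bridge
    sudo tcpdump -ni br0 host <guest-ip> and '(port 80 or port 443)'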


btw, would you like a better UI design for a management tool?  I have 
some ideas but would need someone with hands to put it together.


--- eric


Re: next puzzle: Re: can I make this work… (Foundation for accessibility project)

2014-11-21 Thread Eric S. Johansson

a little more info

On 11/21/2014 01:24 PM, Eric S. Johansson wrote:

next puzzle. updates are not working
using bridged to eth0
using virt io driver (checked install on windows)
browser works in vm (quite well in fact)
watching output of tcpdump

and there is no apparent traffic for updates.


in resource manager, svchost.exe (netsvcs) is running at 100%



Re: can I make this work… (Foundation for accessibility project)

2014-11-20 Thread Eric S. Johansson


On 11/18/2014 9:57 AM, Eric S. Johansson wrote:


That's great to know. I will spin up a version of Windows 7 and give it a try. Given that I'm not looking at it, I can strip it down to the barest user interface elements and improve performance significantly.


I tried it and it took me approximately 10 to 12 hours to install 
Windows 7 twice and I didn't even finish installing the last time.


Here's what happened. The first time I installed it, it was a naïve install: I took all the defaults, just set up the ISO, and let the install run. Then I installed all the updates. Hours went by, and it kind of came up and ran, but then I tried to install the virtio drivers and the Windows installation lost its mind. So I did some reading on how to make performance better and on using the virtio drivers in Windows.


So I started the second install: same size disk (25 GB), same amount of RAM (1 GB), and I installed the ethernet, disk, and balloon drivers at the right time. I also changed the cache mode to none and the I/O mode to native, and I think that's about it. Anyway, it was not really any improvement. It was still incredibly slow, and this time it was complaining about running out of memory, and the package install never finished; it just kept going and going. iptraf reported network I/O ranging from 3 kbit to 100 kbit while the updates were running.
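For reference, that second attempt corresponds to roughly this virt-install invocation (a sketch; the install was actually done through the GUI, and the paths and bridge name here are placeholders, but the cache and io settings match what was changed):

    virt-install --name win7 --ram 1024 \
        --cdrom /path/to/win7.iso \
        --os-variant win7 \
        --disk path=/var/lib/libvirt/images/win7.img,size=25,bus=virtio,cache=none,io=native \
        --network bridge=br0,model=virtio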


I'm accustomed to lesser performance on virtual machines. That's the hazard of running on an old and slow laptop (Dell E6400, 2.2 GHz Core Duo, 8 GB RAM [1]), but even VirtualBox is not this slow. So what am I doing wrong? It would be nice to be able to use a slow machine like this, as many handcrips don't have a whole lot of resources for buying newer/faster machines. On the other hand, many of them use desktops and work from one place, whereas someone like me is all over the map (quite literally).


--- eric
[1] Part of the reason I don't bother upgrading machines all that often is that no matter how fast the CPU runs or how much memory I have, Windows always runs at about the same speed.





Re: can I make this work… (Foundation for accessibility project)

2014-11-20 Thread Eric S. Johansson


On 11/20/2014 4:48 PM, Paolo Bonzini wrote:


On 20/11/2014 17:28, Eric S. Johansson wrote:

I'm accustomed to lesser performance on virtual machines. That's the
hazard of a running on old and slow laptop  (dell e6400 (2.2ghz core
duo, 8gb ram)[1]) but even virtual box is not this slow.  So what am I
doing wrong? It would be nice to use a slow machine like this as many
handcrips don't have a whole lot of resources for buying newer/faster
machines. On the other hand, many of them use desktops and work from one
place whereas someone like me is all over the map (quite literally).

How did you start the virtual machine?  Perhaps you're not using KVM but
emulation?  I have a fast machine but slow disk (a NAS on 100 MBit
ethernet) and I can do about 15 automated installations in less than 6
hours.

Are you using libvirt or directly invoking QEMU?
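A quick way to verify that the GUI really created a KVM domain rather than a TCG-emulated one (the domain name "win7" is a placeholder):

    # KVM module loaded and device node present?
    lsmod | grep kvm
    ls -l /dev/kvm
    # the libvirt domain definition should say type='kvm', not type='qemu'
    virsh dumpxml win7 | grep '<domain '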


I was using one of the GUIs (less hand stress than trying to assemble a command line). Unfortunately I'm in Windows 8 right now because I'm writing. I'm fairly sure the GUI was http://virt-manager.org/. I tried a different one, but it kept telling me I only had QEMU; I thought, silly program, that can't be right. Someday I will stop arguing with software or small electronic boxes. They don't care who wins, and they are much more stubborn than I am.


I'll be able to run some tests in about two to three hours, after I finish this document. Let me know what I should look at. On a side note, a pointer to an automated install process would be wonderful.



Re: can I make this work… (Foundation for accessibility project)

2014-11-18 Thread Eric S. Johansson


On 11/18/2014 8:50 AM, Paolo Bonzini wrote:


I'm adding two people who might know.

Do you have any idea what the magic to pipe data back to the Linux
host should look like?  Does a normal serial port (COM1 for Windows,
/dev/ttyS0 for Linux) work?



The fine magic comes in three forms: keystroke injection, context feedback, and exporting UI elements (such as the microphone level, recognition correction, and partial-recognition pop-ups) into the Linux environment.


All of these have in common the magic trick of using the isolation of the Windows environment to provide a single dictation target to NaturallySpeaking. All of the information necessary for the above capabilities would pass through this target. Initially, this would be an ssh session with a command redirecting standard input into whatever accessibility inputs are available.
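One concrete shape for that single target, in the spirit of Paolo's serial-port question, would be a virtio-serial channel instead of (or alongside) ssh. A sketch of the QEMU side, where the socket path and channel name are made-up placeholders:

    qemu-system-x86_64 -enable-kvm -m 2048 \
        -device virtio-serial-pci \
        -chardev socket,id=speech0,path=/tmp/speech.sock,server,nowait \
        -device virtserialport,chardev=speech0,name=org.example.speech

    # on the host, read recognition output and grammar events from the socket
    socat - UNIX-CONNECT:/tmp/speech.sock

Inside the Windows guest the channel shows up as a named virtio-serial port once the virtio-win serial driver is installed.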


The host side of this gateway would be responsible for all of the proper input redirection. In theory, it would even be possible to direct speech recognition towards two targets depending on the grammar. For example, in the programming-by-speech environment I'm working on, I would sometimes dictate directly into the editor and sometimes into a secondary window for focused speech UI action. At no time would my hand touch the mouse. :-) The switch would happen because of the context set by the speech UI, as a deliberate effect of certain commands.


--- longer ramble about speech and nuance issues. ---

Being a crip who's trying to write code with speech, it's not going to be fast. Once I get the basic keystroke injection working, it will be good enough to continue developing my programming-by-speech environment. But to discuss that would go down the rathole of current models of speech user interfaces, why they don't work, things you shouldn't do (such as speaking the keyboard), intentional automation, contextual grammars, and a host of other things I've spent the past 15 years learning about and figuring out how to change. By the way, that knowledge and passion is why I've started a consulting practice that focuses on improving user experiences/user interfaces, starting from the intent of the user and the perspective of a disabled person, with the result being an improved UI for everybody.


The hardest part is going to be everything except keystroke injection, because the other pieces require special knowledge that Nuance is loath to give up. I don't get it. Nuance touts, and gets federal benefits for, producing something that is Section 508 compliant, yet the only way it could be considered an accessibility tool is if you do nothing but write in Microsoft Word. I worked for a Dragon reseller for a while with medical record systems, and Nuance doesn't even make an attempt to speech-enable the medical record environment. They have people using a couple of solutions that don't work well and effectively provide no UI automation[1] tied into speech commands.


A bunch of us techno crips have built environments that greatly enhance the range of solutions NaturallySpeaking could be used for, but Nuance won't talk to us, won't give us any documentation to keep things running on our own, won't sell us the documentation either, and, worst of all, has written terms into the AUP designed to bar extensions like our environment unless you buy the most expensive version of NaturallySpeaking available.


And did I mention that they have many bugs that are a significant problem for every user, not to mention for the scripts? The last time I checked, it cost about $10 to report a bug (the support call cost), and then there's no guarantee they'll ever fix it. In version 13, I'm seeing bugs that have been around since version 7 or 8.


I will do what I can to implement the magic, and when I get stumped, I'll figure out what to do technically and politically.


--- eric

[1] This is kind of a lie. They have the tools to let you navigate blindly through an application (i.e., hit 15 tabs, two down arrows, and a mouse click, and you might end up in the right UI element to do something). Unfortunately, they have nothing to make that predictable, repeatable, or able to survive revisions in the user interface. But this is one of those rat holes I said I wouldn't go down.



Re: can I make this work… (Foundation for accessibility project)

2014-11-18 Thread Eric S. Johansson


On 11/18/2014 8:53 AM, Hans de Goede wrote:

kvm's usb pass-through should be able to handle this without any issues
(other than some latency), it uses special buffering for isochronous
usb packets, which should take care of usb audio working.

I've never tested audio recording, but audio playback and video recording
(webcams) work, so I expect audio recording to be fine.



That's great to know. I will spin up a version of Windows 7 and give it a try. Given that I'm not looking at it, I can strip it down to the barest user interface elements and improve performance significantly.


FYI, Windows 8 actually works better for hand crips like myself, because it's easier to hit the big blocky icons with hand tremors and flaky fine motor control than it is to try to hit the tiny little icons and menus. Yes, that's why I like Unity as well. :-) I would love a capability to make the title bar drop-downs temporarily bigger, so I could more easily pick elements out of them, but have them otherwise stay visually out of the way.



can I make this work… (Foundation for accessibility project)

2014-11-17 Thread Eric S. Johansson
This is a rather different use case than what you've been thinking of for KVM. It could mean a significant improvement in the quality of life of disabled programmers like myself. It's difficult to convey what it's like to try to use computers with speech recognition for something other than writing, so bear with me when I say something is real but I don't quite prove it yet. Also, please take it as read that the only really usable speech recognition environment out there is NaturallySpeaking, with Google close behind in terms of accuracy but not even on the same planet in its ability to be extended for speech-enabled applications.


I'm trying to figure out ways of making it possible to drive Linux from Windows speech recognition (NaturallySpeaking). The goal is a system where Windows runs in a virtual machine (Linux host), audio is passed through from a USB headset to the Windows environment, and the output of the recognition engine is piped through some magic back to the Linux host.


The hardest part of all of this, without question, is getting clean, uninterrupted audio from the USB device all the way through to the Windows virtual machine. VirtualBox and VMware mostly fail at delivering reliable audio to the virtual machine.


I expect KVM not to work right with regard to getting clean audio/real-time USB, but I'm asking in case I'm wrong. If it doesn't work, or can't work yet, what would it take to make it possible for clean audio to be passed through to a guest?


--- Why this is important, approaches that failed, and why I think this will work. Boring accessibility info ---


Attempts to make Windows- or DOS-based speech recognition drive Linux have a long and tortured history. Almost all of them involve some form of open-loop system that ignores system context and counts on the grammar to specify the context and the subsequent keystrokes injected into the target system.


This model fails because it is effectively speaking the keyboard, which wastes the majority of the power of a good grammar in a speech recognition environment.


The most common configuration for speech recognition in a virtualized environment today is Windows as the host, running speech recognition, and Linux as the guest. It's just a reimplementation of the open-loop system described above, where your dictation results are keystrokes injected into the virtual machine console window. It sometimes works and sometimes drops characters.


One big failing of the Windows host/Linux guest environments is that, in addition to dropping characters, they seem to drop segments of the audio stream on the Windows side. It's common, though not frequent, for this to happen anyway when running Windows with any sort of CPU utilization, but it's almost guaranteed as soon as a virtual machine starts up.


Another failing is that the context the recognition application is aware 
of is the window of the console. It knows nothing about the internal 
context of the virtual machine (what application has focus). And 
unfortunately it can't know anything more because of the way that 
NaturallySpeaking uses the local Windows context.


Inverting the relationship between guest and host, so that Linux is the host and Windows is the guest, solves at least the focus problem. In the virtual machine, you have a portal application that can control the perception of context and tunnel the character stream from the recognition engine into the host OS to drive it open loop. The portal application[1] can also communicate which grammar sequence has been parsed and what action should be taken on the host side. At this point we have the capabilities of a closed-loop speech recognition environment, where a grammar can read context and generate a new grammar to fit the application's state. This means smaller utterances that can be disambiguated, versus the more traditional large-utterance disambiguation technique.


A couple of other advantages of Windows as a guest are that it runs only speech recognition and the portal. There are no browsers, no Flash, no JavaScript, no viruses, and no other stuff taking up resources and distracting from speech recognition working as well as possible. The downside is that the host running the virtual machine needs to give the VM very high, almost real-time, priority[2] so that it doesn't stall and speech recognition works as quickly and as accurately as possible.
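One blunt way to do that on the host is sketched below (the priority value is an arbitrary example; pinning vCPUs or using cgroups would be the more careful approach, and SCHED_FIFO can starve the host if overdone):

    # find the VM's QEMU process
    pgrep -a qemu
    # give it soft real-time scheduling (illustrative priority)
    sudo chrt -f -p 50 <qemu-pid>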


Hope I didn't bore you too badly. Thank you for reading and I hope we 
can make this work.

--- eric



[1] Should I call it cake?
[2] I'm looking at you, Firefox, sucking down 30% of the CPU doing nothing.