Re: one out of four existing kvm guest's not starting after system upgrade

2012-02-28 Thread Thomas Fjellstrom
On Tue Feb 28, 2012, you wrote:
 On 2012-02-19 21:13, Thomas Fjellstrom wrote:
  I'm pretty much stumped on this. So I decided to try re-creating the vm
  through virt-manager. It's up and running now. The only two major
  differences I can see between the old and new config are the machine
  (-M pc-0.12 vs -M pc-1.0) parameter, and the uuid. The rest of the
  parameters I played with a lot trying to get it to work by starting up
  the vm manually from the cli. I can't really see how those two changes
  would do much of anything, considering the other three VMs are still
  configured to use -M pc-0.12, and they work fine.
 
 To pick up this topic again: the trace contains no clear indication of
 what is going on. Now I'm trying to understand what works and what
 doesn't. Please correct / extend as required:
 
  - qemu-kvm-0.12 problematic-vm.img   [OK]
  - qemu-kvm-1.0 -M pc-0.12 problematic-vm.img [HANG]
  - qemu-kvm-1.0 -M pc-1.0 problematic-vm.img  [OK]
 
 In all cases, the image is the same, never reinstalled?

Right. Same exact disk image.

 
 BTW, what is your guest again? What is your VM configuration?

The guest is Debian squeeze (with a trace of sid, but not a whole lot).
 
 Those IOCTL error messages you find in the kernel log likely relate to
 direct cdrom access from the qemu process. Do you pass a host drive
 through?

Not a CD-ROM drive, no; the host doesn't even have one. There are some
virtio LVM disk images passed through.
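
For reference, each of those is defined in the libvirt XML along these
lines (a sketch; the volume path and target name are made-up examples, not
my actual config):

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw'/>
    <source dev='/dev/vg0/guest-root'/>
    <target dev='vda' bus='virtio'/>
  </disk>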
 
 Jan


-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: one out of four existing kvm guest's not starting after system upgrade

2012-02-19 Thread Thomas Fjellstrom
On Sat Feb 18, 2012, Thomas Fjellstrom wrote:
 On Sat Feb 18, 2012, Jan Kiszka wrote:
  On 2012-02-18 09:50, Thomas Fjellstrom wrote:
   On Sat Feb 18, 2012, Jan Kiszka wrote:
   On 2012-02-18 05:49, Thomas Fjellstrom wrote:
   I just updated my kvm host, kernel upgraded from 2.6.38 up to 3.2,
   and qemu+qemu-kvm updated (not sure from what to what). But after
   the upgrade, one of my guests will not start up. It gets stuck with
   60-80% cpu use, almost no memory is allocated by qemu/kvm, and no
   output of any kind is seen (in the host console, or a bunch of the
   guest output options like curses display, stdio output, virsh
   console, vnc or sdl output). I normally use libvirt to manage the
   guests, but I've attempted to run qemu manually, and have the same
   problems.
   
   What can cause this?
   
   Just tested booting back into the old kernel, the one guest still
   won't start, while the rest do. I'm thoroughly confused.
   
   You mean if you only update qemu-kvm, the problem persists, just with
   lower probability? In that case, we definitely need the version of
   your current qemu-kvm installation. Also, it would be nice to attach
   gdb to the stuck qemu-kvm process, issuing a thread apply all
   backtrace in that state.
   
   Jan
   
   Sorry I wasn't clear. If I just update qemu-kvm (And qemu with it, and
   not the kernel), it always just hangs on load.
   
   current version:
   QEMU emulator version 1.0 (qemu-kvm-1.0 Debian 1.0+dfsg-8), Copyright
   (c) 2003-2008 Fabrice Bellard
   
   gdb thread apply all bt:
    Thread 2 (Thread 0x7fe7b810c700 (LWP 14650)):
    #0  0x00007fe7c0a11957 in ioctl () from /lib/x86_64-linux-gnu/libc.so.6
    #1  0x00007fe7c53155e9 in kvm_vcpu_ioctl (env=<optimized out>, type=<optimized out>)
        at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/kvm-all.c:1101
    #2  0x00007fe7c5315731 in kvm_cpu_exec (env=0x7fe7c61cb350)
        at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/kvm-all.c:987
    #3  0x00007fe7c52ecf31 in qemu_kvm_cpu_thread_fn (arg=0x7fe7c61cb350)
        at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/cpus.c:740
    #4  0x00007fe7c0ccdb50 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
    #5  0x00007fe7c0a1890d in clone () from /lib/x86_64-linux-gnu/libc.so.6
    #6  0x0000000000000000 in ?? ()
    
    Thread 1 (Thread 0x7fe7c5101900 (LWP 14648)):
    #0  0x00007fe7c0a12403 in select () from /lib/x86_64-linux-gnu/libc.so.6
    #1  0x00007fe7c525b56c in main_loop_wait (nonblocking=<optimized out>)
        at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/main-loop.c:456
    #2  0x00007fe7c51a372f in main_loop ()
        at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/vl.c:1482
    #3  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
        at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/vl.c:3523
   
   Thanks :)
  
  OK, then we need a kernel view on this. Can you try
  
  http://www.linux-kvm.org/page/Tracing
  
  ?
  
  Thanks,
  Jan
 
 It made a rather large trace file. I'm pretty sure no one wants me to
 actually link (or attach) the full 1.5G file, but the last 5000 lines
 might be useful, so I've compressed and attached them. (But just in case,
 I'm lzma'ing the entire 1.5G trace data file.)

I'm pretty much stumped on this. So I decided to try re-creating the vm
through virt-manager. It's up and running now. The only two major differences
I can see between the old and new config are the machine (-M pc-0.12 vs -M
pc-1.0) parameter, and the uuid. The rest of the parameters I played with a
lot trying to get it to work by starting up the vm manually from the cli. I
can't really see how those two changes would do much of anything, considering
the other three VMs are still configured to use -M pc-0.12, and they work
fine.

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: one out of four existing kvm guest's not starting after system upgrade

2012-02-18 Thread Thomas Fjellstrom
On Sat Feb 18, 2012, Jan Kiszka wrote:
 On 2012-02-18 05:49, Thomas Fjellstrom wrote:
  I just updated my kvm host, kernel upgraded from 2.6.38 up to 3.2, and
  qemu+qemu-kvm updated (not sure from what to what). But after the
  upgrade, one of my guests will not start up. It gets stuck with 60-80%
  cpu use, almost no memory is allocated by qemu/kvm, and no output of any
  kind is seen (in the host console, or a bunch of the guest output
  options like curses display, stdio output, virsh console, vnc or sdl
  output). I normally use libvirt to manage the guests, but I've attempted
  to run qemu manually, and have the same problems.
  
  What can cause this?
  
  Just tested booting back into the old kernel, the one guest still won't
  start, while the rest do. I'm thoroughly confused.
 
 You mean if you only update qemu-kvm, the problem persists, just with
 lower probability? In that case, we definitely need the version of your
 current qemu-kvm installation. Also, it would be nice to attach gdb to
 the stuck qemu-kvm process, issuing a thread apply all backtrace in
 that state.
 
 Jan

Sorry, I wasn't clear. If I just update qemu-kvm (and qemu with it, but not
the kernel), it always just hangs on load.

current version:
QEMU emulator version 1.0 (qemu-kvm-1.0 Debian 1.0+dfsg-8), Copyright (c) 
2003-2008 Fabrice Bellard
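
For anyone wanting to reproduce this, the backtrace below was grabbed
roughly like so (a sketch; the pid is just from this run):

  # attach to the stuck process and dump every thread's stack
  gdb -batch -ex 'thread apply all bt' -p 14648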

gdb thread apply all bt:
Thread 2 (Thread 0x7fe7b810c700 (LWP 14650)):
#0  0x00007fe7c0a11957 in ioctl () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007fe7c53155e9 in kvm_vcpu_ioctl (env=<optimized out>, type=<optimized out>)
    at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/kvm-all.c:1101
#2  0x00007fe7c5315731 in kvm_cpu_exec (env=0x7fe7c61cb350)
    at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/kvm-all.c:987
#3  0x00007fe7c52ecf31 in qemu_kvm_cpu_thread_fn (arg=0x7fe7c61cb350)
    at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/cpus.c:740
#4  0x00007fe7c0ccdb50 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007fe7c0a1890d in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000000000 in ?? ()

Thread 1 (Thread 0x7fe7c5101900 (LWP 14648)):
#0  0x00007fe7c0a12403 in select () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007fe7c525b56c in main_loop_wait (nonblocking=<optimized out>)
    at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/main-loop.c:456
#2  0x00007fe7c51a372f in main_loop ()
    at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/vl.c:1482
#3  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
    at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/vl.c:3523

Thanks :)

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: one out of four existing kvm guest's not starting after system upgrade

2012-02-18 Thread Thomas Fjellstrom
On Sat Feb 18, 2012, Thomas Fjellstrom wrote:
 On Sat Feb 18, 2012, Jan Kiszka wrote:
  On 2012-02-18 05:49, Thomas Fjellstrom wrote:
   I just updated my kvm host, kernel upgraded from 2.6.38 up to 3.2, and
   qemu+qemu-kvm updated (not sure from what to what). But after the
   upgrade, one of my guests will not start up. It gets stuck with 60-80%
   cpu use, almost no memory is allocated by qemu/kvm, and no output of
   any kind is seen (in the host console, or a bunch of the guest output
   options like curses display, stdio output, virsh console, vnc or sdl
   output). I normally use libvirt to manage the guests, but I've
   attempted to run qemu manually, and have the same problems.
   
   What can cause this?
   
   Just tested booting back into the old kernel, the one guest still won't
   start, while the rest do. I'm thoroughly confused.
  
  You mean if you only update qemu-kvm, the problem persists, just with
  lower probability? In that case, we definitely need the version of your
  current qemu-kvm installation. Also, it would be nice to attach gdb to
  the stuck qemu-kvm process, issuing a thread apply all backtrace in
  that state.
  
  Jan
 
 Sorry I wasn't clear. If I just update qemu-kvm (And qemu with it, and not
 the kernel), it always just hangs on load.

To be clear, it's the one VM I mentioned before; the other three are just
fine. The stranger bit is that I'm pretty sure I created them all at the same
time, in very similar ways. At the very least, the libvirt configs for them
are all nearly identical.

[snip]

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: one out of four existing kvm guest's not starting after system upgrade

2012-02-18 Thread Thomas Fjellstrom
On Sat Feb 18, 2012, Jan Kiszka wrote:
 On 2012-02-18 09:50, Thomas Fjellstrom wrote:
  On Sat Feb 18, 2012, Jan Kiszka wrote:
  On 2012-02-18 05:49, Thomas Fjellstrom wrote:
  I just updated my kvm host, kernel upgraded from 2.6.38 up to 3.2, and
  qemu+qemu-kvm updated (not sure from what to what). But after the
  upgrade, one of my guests will not start up. It gets stuck with 60-80%
  cpu use, almost no memory is allocated by qemu/kvm, and no output of
  any kind is seen (in the host console, or a bunch of the guest output
  options like curses display, stdio output, virsh console, vnc or sdl
  output). I normally use libvirt to manage the guests, but I've
  attempted to run qemu manually, and have the same problems.
  
  What can cause this?
  
  Just tested booting back into the old kernel, the one guest still won't
  start, while the rest do. I'm thoroughly confused.
  
  You mean if you only update qemu-kvm, the problem persists, just with
  lower probability? In that case, we definitely need the version of your
  current qemu-kvm installation. Also, it would be nice to attach gdb to
  the stuck qemu-kvm process, issuing a thread apply all backtrace in
  that state.
  
  Jan
  
  Sorry I wasn't clear. If I just update qemu-kvm (And qemu with it, and
  not the kernel), it always just hangs on load.
  
  current version:
  QEMU emulator version 1.0 (qemu-kvm-1.0 Debian 1.0+dfsg-8), Copyright (c)
  2003-2008 Fabrice Bellard
  
  gdb thread apply all bt:
  Thread 2 (Thread 0x7fe7b810c700 (LWP 14650)):
  #0  0x00007fe7c0a11957 in ioctl () from /lib/x86_64-linux-gnu/libc.so.6
  #1  0x00007fe7c53155e9 in kvm_vcpu_ioctl (env=<optimized out>, type=<optimized out>)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/kvm-all.c:1101
  #2  0x00007fe7c5315731 in kvm_cpu_exec (env=0x7fe7c61cb350)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/kvm-all.c:987
  #3  0x00007fe7c52ecf31 in qemu_kvm_cpu_thread_fn (arg=0x7fe7c61cb350)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/cpus.c:740
  #4  0x00007fe7c0ccdb50 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
  #5  0x00007fe7c0a1890d in clone () from /lib/x86_64-linux-gnu/libc.so.6
  #6  0x0000000000000000 in ?? ()
  
  Thread 1 (Thread 0x7fe7c5101900 (LWP 14648)):
  #0  0x00007fe7c0a12403 in select () from /lib/x86_64-linux-gnu/libc.so.6
  #1  0x00007fe7c525b56c in main_loop_wait (nonblocking=<optimized out>)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/main-loop.c:456
  #2  0x00007fe7c51a372f in main_loop ()
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/vl.c:1482
  #3  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/vl.c:3523
  
  Thanks :)
 
 OK, then we need a kernel view on this. Can you try
 
 http://www.linux-kvm.org/page/Tracing
 
 ?
 
 Thanks,
 Jan

Before I get done with that, does it help that I see some dmesg warnings (with 
the 3.2 kernel) relating to some ioctls? 

[   26.694981] kvm: sending ioctl 80200204 to a partition!
[   26.695036] kvm: sending ioctl 5326 to a partition!

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: one out of four existing kvm guest's not starting after system upgrade

2012-02-18 Thread Thomas Fjellstrom
On Sat Feb 18, 2012, Jan Kiszka wrote:
 On 2012-02-18 09:50, Thomas Fjellstrom wrote:
  On Sat Feb 18, 2012, Jan Kiszka wrote:
  On 2012-02-18 05:49, Thomas Fjellstrom wrote:
  I just updated my kvm host, kernel upgraded from 2.6.38 up to 3.2, and
  qemu+qemu-kvm updated (not sure from what to what). But after the
  upgrade, one of my guests will not start up. It gets stuck with 60-80%
  cpu use, almost no memory is allocated by qemu/kvm, and no output of
  any kind is seen (in the host console, or a bunch of the guest output
  options like curses display, stdio output, virsh console, vnc or sdl
  output). I normally use libvirt to manage the guests, but I've
  attempted to run qemu manually, and have the same problems.
  
  What can cause this?
  
  Just tested booting back into the old kernel, the one guest still won't
  start, while the rest do. I'm thoroughly confused.
  
  You mean if you only update qemu-kvm, the problem persists, just with
  lower probability? In that case, we definitely need the version of your
  current qemu-kvm installation. Also, it would be nice to attach gdb to
  the stuck qemu-kvm process, issuing a thread apply all backtrace in
  that state.
  
  Jan
  
  Sorry I wasn't clear. If I just update qemu-kvm (And qemu with it, and
  not the kernel), it always just hangs on load.
  
  current version:
  QEMU emulator version 1.0 (qemu-kvm-1.0 Debian 1.0+dfsg-8), Copyright (c)
  2003-2008 Fabrice Bellard
  
  gdb thread apply all bt:
  Thread 2 (Thread 0x7fe7b810c700 (LWP 14650)):
  #0  0x00007fe7c0a11957 in ioctl () from /lib/x86_64-linux-gnu/libc.so.6
  #1  0x00007fe7c53155e9 in kvm_vcpu_ioctl (env=<optimized out>, type=<optimized out>)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/kvm-all.c:1101
  #2  0x00007fe7c5315731 in kvm_cpu_exec (env=0x7fe7c61cb350)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/kvm-all.c:987
  #3  0x00007fe7c52ecf31 in qemu_kvm_cpu_thread_fn (arg=0x7fe7c61cb350)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/cpus.c:740
  #4  0x00007fe7c0ccdb50 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
  #5  0x00007fe7c0a1890d in clone () from /lib/x86_64-linux-gnu/libc.so.6
  #6  0x0000000000000000 in ?? ()
  
  Thread 1 (Thread 0x7fe7c5101900 (LWP 14648)):
  #0  0x00007fe7c0a12403 in select () from /lib/x86_64-linux-gnu/libc.so.6
  #1  0x00007fe7c525b56c in main_loop_wait (nonblocking=<optimized out>)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/main-loop.c:456
  #2  0x00007fe7c51a372f in main_loop ()
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/vl.c:1482
  #3  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
      at /build/buildd-qemu-kvm_1.0+dfsg-8-amd64-ppNMqm/qemu-kvm-1.0+dfsg/vl.c:3523
  
  Thanks :)
 
 OK, then we need a kernel view on this. Can you try
 
 http://www.linux-kvm.org/page/Tracing
 
 ?
 
 Thanks,
 Jan

It made a rather large trace file. I'm pretty sure no one wants me to
actually link (or attach) the full 1.5G file, but the last 5000 lines might
be useful, so I've compressed and attached them. (But just in case, I'm
lzma'ing the entire 1.5G trace data file.)
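
For the record, the attachment was produced roughly like this (a sketch;
the filenames match the attachment named below):

  trace-cmd report > trace-cmd.report1      # turn trace.dat into text
  tail -n 5000 trace-cmd.report1 > trace-cmd.report1-tail
  lzma trace-cmd.report1-tail               # produces the attached .lzma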

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


trace-cmd.report1-tail.lzma
Description: application/lzma


one out of four existing kvm guest's not starting after system upgrade

2012-02-17 Thread Thomas Fjellstrom
I just updated my kvm host: the kernel went from 2.6.38 up to 3.2, and
qemu+qemu-kvm were updated (not sure from what to what). But after the
upgrade, one of my guests will not start up. It gets stuck at 60-80% CPU use,
almost no memory is allocated by qemu/kvm, and no output of any kind appears
(on the host console, or through any of the guest output options: curses
display, stdio output, virsh console, VNC, or SDL). I normally use libvirt to
manage the guests, but I've attempted to run qemu manually, and I have the
same problems.

What can cause this? 

Just tested booting back into the old kernel; the one guest still won't
start, while the rest do. I'm thoroughly confused.

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: How many threads should a kvm vm be starting?

2011-09-28 Thread Thomas Fjellstrom
On September 28, 2011, Daniel P. Berrange wrote:
 On Tue, Sep 27, 2011 at 04:04:41PM -0600, Thomas Fjellstrom wrote:
  On September 27, 2011, Avi Kivity wrote:
   On 09/27/2011 03:29 AM, Thomas Fjellstrom wrote:
I just noticed something interesting, a virtual machine on one of my
servers seems to have 69 threads (including the main thread). Other
guests on the machine only have a couple threads.

Is this normal? or has something gone horribly wrong?
   
   It's normal if the guest does a lot of I/O.  The thread count should go
   down when the guest idles.
  
  Ah, that would make sense. Though it kind of defeats assigning a vm a
  single cpu/core. A single VM can now DOS an entire multi-core-cpu
  server. It pretty much pegged my dual core (with HT) server for a couple
  hours.
 
 You can mitigate these problems by putting each KVM process in its own
 cgroup, and using the 'cpu_shares' tunable to ensure that each KVM
 process gets the same relative ratio of CPU time, regardless of how
 many threads it is running. With newer kernels there are other CPU
 tunables for placing hard caps on CPU utilization of the process as
 a whole too.

I'll have to look into how to set that up with libvirt. A brief search leads 
me to believe it's rather easy to set up, so I'll have to do that asap :)
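
It looks like libvirt exposes that tunable directly through virsh,
something like this (a sketch from a quick read of the docs; the guest
name and value are examples):

  # halve this guest's relative CPU weight (the default is 1024)
  virsh schedinfo myguest --set cpu_shares=512
  # show the current scheduler parameters
  virsh schedinfo myguest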

 Regards,
 Daniel


-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: How many threads should a kvm vm be starting?

2011-09-27 Thread Thomas Fjellstrom
On September 27, 2011, Avi Kivity wrote:
 On 09/27/2011 03:29 AM, Thomas Fjellstrom wrote:
  I just noticed something interesting, a virtual machine on one of my
  servers seems to have 69 threads (including the main thread). Other
  guests on the machine only have a couple threads.
  
  Is this normal? or has something gone horribly wrong?
 
 It's normal if the guest does a lot of I/O.  The thread count should go
 down when the guest idles.

Ah, that would make sense. Though it kind of defeats the purpose of assigning
a VM a single CPU/core: a single VM can now DoS an entire multi-core server.
It pretty much pegged my dual-core (with HT) server for a couple of hours.

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


order 1 page allocation failures

2011-09-27 Thread Thomas Fjellstrom
[362409.430778]  free:31113 slab_reclaimable:36977 slab_unreclaimable:11009
[362409.430783]  mapped:11738 shmem:226 pagetables:9104 bounce:0
[362409.430791] Node 0 DMA free:15912kB min:128kB low:160kB high:192kB 
active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB 
unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15688kB 
mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB 
slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB 
writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[362409.430827] lowmem_reserve[]: 0 3254 8051 8051
[362409.430837] Node 0 DMA32 free:57588kB min:27260kB low:34072kB high:40888kB 
active_anon:454620kB inactive_anon:92920kB active_file:1279680kB 
inactive_file:1348708kB unevictable:0kB isolated(anon):0kB isolated(file):0kB 
present:3332192kB mlocked:0kB dirty:5928kB writeback:0kB mapped:17404kB 
shmem:572kB slab_reclaimable:71792kB slab_unreclaimable:5488kB 
kernel_stack:296kB pagetables:2420kB unstable:0kB bounce:0kB writeback_tmp:0kB 
pages_scanned:0 all_unreclaimable? no
[362409.430877] lowmem_reserve[]: 0 0 4797 4797
[362409.430886] Node 0 Normal free:51448kB min:40192kB low:50240kB high:60288kB 
active_anon:1852060kB inactive_anon:348928kB active_file:1183384kB 
inactive_file:1204152kB unevictable:0kB isolated(anon):0kB isolated(file):0kB 
present:4912640kB mlocked:0kB dirty:6884kB writeback:0kB mapped:29548kB 
shmem:332kB slab_reclaimable:76116kB slab_unreclaimable:38548kB 
kernel_stack:3184kB pagetables:33996kB unstable:0kB bounce:0kB 
writeback_tmp:0kB pages_scanned:9 all_unreclaimable? no
[362409.430926] lowmem_reserve[]: 0 0 0 0
[362409.430935] Node 0 DMA: 0*4kB 1*8kB 0*16kB 1*32kB 2*64kB 1*128kB 1*256kB 
0*512kB 1*1024kB 1*2048kB 3*4096kB = 15912kB
[362409.430959] Node 0 DMA32: 13011*4kB 198*8kB 7*16kB 0*32kB 0*64kB 0*128kB 
0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 57836kB
[362409.430983] Node 0 Normal: 11740*4kB 49*8kB 0*16kB 0*32kB 0*64kB 0*128kB 
0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 51448kB
[362409.431007] 1269832 total pagecache pages
[362409.431012] 15734 pages in swap cache
[362409.431018] Swap cache stats: add 129275, delete 113541, find 78987/83931
[362409.431025] Free swap  = 7546528kB
[362409.431030] Total swap = 7811068kB
[362409.433896] 2097136 pages RAM
[362409.433896] 47949 pages reserved
[362409.433896] 681769 pages shared
[362409.433896] 1367619 pages non-shared

The server has 8G of RAM, and usually never uses more than about 4G (it's
sitting at 3.4G right now).

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


How many threads should a kvm vm be starting?

2011-09-26 Thread Thomas Fjellstrom
I just noticed something interesting: a virtual machine on one of my servers
seems to have 69 threads (including the main thread). Other guests on the
machine only have a couple of threads.

Is this normal, or has something gone horribly wrong?

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: kvm linux guest hanging for minutes at a time

2011-08-11 Thread Thomas Fjellstrom
On August 11, 2011, Avi Kivity wrote:
 On 08/09/2011 06:33 PM, Nick wrote:
  Hi,
  
  Just joined this list, looking for leads to solve a similar-sounding
  problem (guest processes hanging for seconds or minutes when host IO
  load is high). I'll say more in a separate email, but I caught the end
  of this thread and wanted to ask about kvm-clock.
  
  Naively I'd have thought that using the wrong clock would not actually
  *cause* hangs like this. Or is that what you're implying?
 
 Using the wrong clock easily causes hangs.  The system schedules a
 wakeup in 3 ms, wrong clock causes it to wakeup in 3 years, you get a
 hang for (3 years - 3 ms).

I am wondering, though, why the system time would be used to schedule wakeups
when something more like the POSIX CLOCK_MONOTONIC would make more sense. You
really don't want host clock changes to interfere with a guest so badly that
it sleeps forever (at least, I don't think you do).

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: kvm linux guest hanging for minutes at a time

2011-08-09 Thread Thomas Fjellstrom
On August 9, 2011, Avi Kivity wrote:
 On 08/07/2011 05:06 PM, Thomas Fjellstrom wrote:
  Occasionally when there's heavy cpu and/or io load, a kvm guest will lock
  up for minutes at a time, last occurrence was for about 12 minutes or
  so, and the guest itself reported:
  
  [1992982.639514] Clocksource tsc unstable (delta = -747307707123 ns)
  
  in dmesg after it came back. The only other hint as to what is going on
  is that the irq count for local timer requests, virtio-input and
  virtio- requests spikes rather high. Also one of the cpu cores on the
  host was pegged the entire time.
  
  The last thing to cause a hang was an aptitude upgrade in the guest,
  which was a bit behind, so it had to update over 300 packages.
  
  The host is running 2.6.38-1-amd64 (2.6.38+32) from debian, qemu-kvm
  0.14.0, and the guest was running 2.6.38-2-amd64 (not sure on the +
  number).
  
  Is this a known problem, thats hopefully fixed in newer kernels and
  qemu/kvm packages?
 
 Your guest isn't using kvmclock for some reason.  Is it compiled in the
 guest kernel?  What are the contents of
 /sys/devices/system/clocksource/clocksource0/available_clocksource and
 /sys/devices/system/clocksource/clocksource0/current_clocksource (in the
 guest filesystem)?

Hi, it seems it is using kvm-clock:

$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
kvm-clock hpet acpi_pm
$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
kvm-clock

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: kvm linux guest hanging for minutes at a time

2011-08-09 Thread Thomas Fjellstrom
On August 9, 2011, Avi Kivity wrote:
 On 08/09/2011 03:03 PM, Thomas Fjellstrom wrote:
Your guest isn't using kvmclock for some reason.  Is it compiled in
the guest kernel?  What are the contents of
/sys/devices/system/clocksource/clocksource0/available_clocksource and
/sys/devices/system/clocksource/clocksource0/current_clocksource (in
the guest filesystem)?
  
  Hi, it seems it is using kvm-clock:
  
  $ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
  kvm-clock hpet acpi_pm
  $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
  kvm-clock
 
 Yikes.  Please trace such a hang according to
 http://www.linux-kvm.org/page/Tracing.

Does it matter that I have several VMs running? Is there a way to limit the
tracing to the single kvm process that's been locking up?

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: kvm linux guest hanging for minutes at a time

2011-08-09 Thread Thomas Fjellstrom
On August 9, 2011, Avi Kivity wrote:
 On 08/09/2011 05:31 PM, Thomas Fjellstrom wrote:
  On August 9, 2011, Avi Kivity wrote:
On 08/09/2011 03:03 PM, Thomas Fjellstrom wrote:
 Your guest isn't using kvmclock for some reason.  Is it
 compiled in the guest kernel?  What are the contents of
 /sys/devices/system/clocksource/clocksource0/available_clocksou
 rce and
 /sys/devices/system/clocksource/clocksource0/current_clocksour
 ce (in the guest filesystem)?
  
  Hi, it seems it is using kvm-clock:
  
  $ cat
  /sys/devices/system/clocksource/clocksource0/available_clocksource
  kvm-clock hpet acpi_pm
  $ cat
  /sys/devices/system/clocksource/clocksource0/current_clocksource
  kvm-clock

Yikes.  Please trace such a hang according to
http://www.linux-kvm.org/page/Tracing.
  
  Does it matter that I have several vms running? Is there a way to limit
  it to tracing the single kvm process that's been locking up?
 
 You can use trace-cmd record -F ... qemu ... but that misses out on
 events that run from workqueues.
 
 Best to stop those other guests.

I would prefer not to do that; those other guests are my web server, mail
server, and database server. I have no idea if I can reproduce the problem in
a reasonable time frame.

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: kvm linux guest hanging for minutes at a time

2011-08-09 Thread Thomas Fjellstrom
On August 9, 2011, Avi Kivity wrote:
 On 08/09/2011 05:46 PM, Thomas Fjellstrom wrote:
  Does it matter that I have several vms running? Is there a way to
  limit it to tracing the single kvm process that's been locking up?

You can use trace-cmd record -F ... qemu ... but that misses out on
events that run from workqueues.

Best to stop those other guests.
  
  I would prefer not to do that, those other guests are my web server, mail
  server, and database server. I have no idea if I can reproduce the
  problem in a reasonable time frame.
 
 Okay then, please use -F.
 
 Note, please be sure to note the time the guest hangs so we can
 correlate it with the trace.

The fun part is that the last thing to cause a hang was an 'aptitude
dist-upgrade', which updated the kernel and removed the running one, so if
the guest has to be restarted, I won't be able to run the same kernel again
unless I can find the old kernel package somewhere.

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: kvm linux guest hanging for minutes at a time

2011-08-09 Thread Thomas Fjellstrom
On August 9, 2011, Avi Kivity wrote:
 On 08/09/2011 05:46 PM, Thomas Fjellstrom wrote:
  Does it matter that I have several vms running? Is there a way to
  limit it to tracing the single kvm process that's been locking up?

You can use trace-cmd record -F ... qemu ... but that misses out on
events that run from workqueues.

Best to stop those other guests.
  
  I would prefer not to do that, those other guests are my web server, mail
  server, and database server. I have no idea if I can reproduce the
  problem in a reasonable time frame.
 
 Okay then, please use -F.
 
 Note, please be sure to note the time the guest hangs so we can
 correlate it with the trace.

Probably a stupid question, but what is the full syntax for the command? I
only have kvm processes, and qemu is set to give the threads
'qemu:instance-name'-style names.
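
What I've been trying in the meantime looks like this (a sketch; I'm
assuming trace-cmd's -P <pid> filter is the right knob for an
already-running process, and the pid is an example):

  # record kvm events for one existing process only
  trace-cmd record -e kvm -P 14648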

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


kvm linux guest hanging for minutes at a time

2011-08-07 Thread Thomas Fjellstrom
Occasionally, when there's heavy CPU and/or IO load, a kvm guest will lock up
for minutes at a time; the last occurrence was about 12 minutes, and the
guest itself reported:

[1992982.639514] Clocksource tsc unstable (delta = -747307707123 ns)

in dmesg after it came back. The only other hint as to what is going on is
that the IRQ counts for local timer requests, virtio-input and virtio-
requests spike rather high. Also, one of the CPU cores on the host was pegged
the entire time.

The last thing to cause a hang was an aptitude upgrade in the guest, which 
was a bit behind, so it had to update over 300 packages.

The host is running 2.6.38-1-amd64 (2.6.38+32) from debian, qemu-kvm 0.14.0, 
and the guest was running 2.6.38-2-amd64 (not sure on the + number).

Is this a known problem that's hopefully fixed in newer kernels and qemu/kvm
packages?

Thanks

-- 
Thomas Fjellstrom
tho...@fjellstrom.ca


Re: Advice for Router Guest

2010-01-12 Thread Thomas Fjellstrom
On Tue January 12 2010, Aaron Clausen wrote:
 I'm looking at moving from the router I'm running currently (a Linux
 box) and moving it into a KVM guest.  What are the recommendations for
 the networking of the external interface?  Should I just pass the NIC
 card through via PCI passthrough or is there a recommended way?
 

If your board doesn't have VT-d or an AMD IOMMU, you can't use passthrough 
afaik. Or at the very least, you don't want to; I'm not sure modern NICs 
will even run without DMA.

If you're stuck without passthrough, you'll have to do what I did: set up 
some bridges on the host and use bridged tun/tap networking with the 
guests.
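
The host-side setup is roughly this (a sketch; interface names and
addresses are examples, not my actual config):

  brctl addbr br0                # create the bridge
  brctl addif br0 eth0           # enslave the physical NIC
  ifconfig eth0 0.0.0.0 up       # move the IP off the NIC...
  ifconfig br0 192.168.1.10 netmask 255.255.255.0 up   # ...onto the bridge
  # qemu's ifup script then just does: brctl addif br0 $1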

Performance is decent, but I've found my firewall guest will peg its CPU if 
the WAN load is too high. Of course, my firewall is running pfSense and has 
to stick with an emulated NIC, so that's probably why the CPU load is so 
high.

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: xorg running very slow problem

2010-01-09 Thread Thomas Fjellstrom
On Sat January 9 2010, John Wong wrote:
 When I use qemu-kvm-87, qemu-kvm-0.12.1.2, or a snapshot (downloaded from
 http://git.kernel.org/?p=virt/kvm/qemu-kvm.git;a=summary),
 xorg runs very slowly (the screen refreshes line by line), and I
 noticed the Xorg program running with very high CPU (99% in top).
 
 If I use qemu-kvm-86, xorg does not have this problem, and runs just fine.
 
 The modules come from debian-2.6.32 kvm-intel.
 
 The host is debian/x64/2.6.32.
 
 I use this command to start kvm:
 
 ${KVM} -drive
 file=${IMG},index=0,if=virtio,boot=on,cache=none,format=qcow2 \
 -net nic,vlan=${NUM},model=virtio,macaddr=00:AA:BB:CC:DD:0${NUM} \
 -net
 tap,vlan=${NUM},ifname=${iface},script=/etc/kvm/kvm-ifup,downscript=/et
 c/kvm/kvm-ifdown \
 -localtime -smp 2 \
 -soundhw all \
 -usb -usbdevice tablet \
 -k en-us -monitor stdio \
 -mem-path /hugepages \
 -vga std -sdl \
 -m 2048 -boot c
 
 Anyone know how to solve this problem?
 
 Please help, thank you.

What happens if you try the vmvga video adapter instead? I used to have 
issues with even the console being horrendously slow, and switching to vmvga 
improved performance.



-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: xorg running very slow problem

2010-01-09 Thread Thomas Fjellstrom
On Sat January 9 2010, John Wong wrote:
 Thomas Fjellstrom wrote:
  What happens if you try the vmvga video adapter instead? I used to have
  issues with even the console being horrendously slow, and switching to
  vmvga improved performance.
 
 Sorry, what is the vmvga adapter?
 
 How do I switch to it?
 In the guest OS/xorg, or on the host kvm command line?

Both, if you have an existing xorg.conf file in the guest.

In qemu you'd change '-vga std' to '-vga vmware', and change the xorg.conf in 
your guest (only if there is one with a driver specified) to use the vmware 
driver.
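
The guest-side change is just the Driver line in the Device section of
xorg.conf, roughly like this (a sketch; keep whatever Identifier yours
already has):

  Section "Device"
      Identifier "Card0"
      Driver     "vmware"
  EndSection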



-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Very bad Speed with Virtio-net

2010-01-08 Thread Thomas Fjellstrom
On Thu January 7 2010, Riccardo Veraldi wrote:
 I have similar results to yours, using CentOS 5.4 x86_64.
 I do not think it is possible to gain more than this right now... or
 rather, I wish it were possible.
 
 If you can get better results, please let me know

I get 600-800 Mbit/s via virtio; the actual speed depends on the direction 
of the traffic. And if I set up the guest as the iperf server, and the host 
as the client, I get upwards of 1.2 Gbit/s.

Some tweaking might improve throughput, but it could also harm latency, and 
none of my guests need anywhere near that kind of throughput, while they do 
appreciate lower latency, so I'm keeping it as it is :)
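
For reference, my test was plain iperf in both directions (a sketch; the
guest address is an example):

  # in the guest:
  iperf -s
  # on the host:
  iperf -c 192.168.1.20 -w 512k -l 512k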

 Rick
 
 Benjamin Schweikert wrote:
  Hello everybody,
  this is my first post on a mailing list, so i hope everything works
  fine.
 
  My host is an AMD X2 4850e with 64-bit Gentoo (unstable). I have
  tested qemu-kvm 0.11, 0.12.x, and the git version from Jan 6th.
  I created my own bridges, so I don't need the option from libvirt. I
  bridged a 1 Gb LAN card for my VMs. When I use the virtio net driver,
  I get about 200-300 Mbit from my desktop to one of my VMs.
  If I use the e1000 driver instead of virtio, I get about
  500-600 Mbit.
  I tested this with the following kernels:
  Host: 2.6.31.6, 2.6.32.1, 2.6.32.2
  Guests: 2.6.26, 2.6.30, 2.6.32 (debian)
  2.6.32 (gentoo)
 
  Here is a default result, virtio vs. e1000:
 
  iperf -c 192.168.0.3 -w 512k -l 512k
  
  Client connecting to 192.168.0.3, TCP port 5001
  TCP window size:   256 KByte (WARNING: requested   512 KByte)
  
  [  3] local 192.168.0.2 port 52968 connected with 192.168.0.3 port 5001
  [ ID] Interval   Transfer Bandwidth
  [  3]  0.0-10.0 sec   438 MBytes   267 Mbits/sec
 
 
  iperf -c 192.168.0.3 -w 512k -l 512k
  
  Client connecting to 192.168.0.3, TCP port 5001
  TCP window size:   256 KByte (WARNING: requested   512 KByte)
  
  [  3] local 192.168.0.2 port 52995 connected with 192.168.0.3 port 5001
  [ ID] Interval   Transfer Bandwidth
  [  3]  0.0-10.0 sec   602 MBytes   505 Mbits/sec
 
  Any ideas what this could be? I attach a dmesg output of my host.
  Thx.
 
  Ben
 


-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Memory usage with qemu-kvm-0.12.1.1

2010-01-04 Thread Thomas Fjellstrom
On Sun January 3 2010, Thomas Fjellstrom wrote:
 On Sun December 27 2009, Avi Kivity wrote:
  On 12/27/2009 07:00 PM, Daniel Bareiro wrote:
   Also, qemu might be leaking memory.  Please post 'pmap $pid' for all
   of your guests (do that before any of the other tests, on your
   swapped-out system).
  
   -
  
 total   626376K
  
 total   626472K
  
 total   626396K
  
 total   635292K
  
 total   625388K
 
  These all seem sane.  So it's a swap regression, hopefully
  2.6.32.something will have a fix.
 
 Sorry to butt in, but here's something I've found odd:
 
 # ps aux | grep /usr/bin/kvm | grep -v grep | cut -f6 -d' ' | \
       xargs -n 1 -i{} pmap {} | grep total
 total   845928K
 total   450336K
 total   441968K
 total   440740K
 total   845848K
 total   465808K
 
 root 10466  2.6  6.2 845924 253804 ?  Sl   2009  2084:29 /usr/bin/kvm -S -M pc -m 512
     -smp 1 -name awiki -uuid 330abdce-f657-e0e2-196b-5bf22c0e76f0
     -monitor unix:/var/lib/libvirt/qemu/awiki.monitor,server,nowait -boot c
     -drive file=/dev/vg0/awiki-root,if=virtio,index=0,boot=on
     -drive file=/dev/vg0/awiki-swap,if=virtio,index=1
     -drive file=/mnt/boris/data/pub/diskimage/debian-503-amd64-netinst.iso,if=ide,media=cdrom,index=2,format=
     -net nic,macaddr=52:54:00:35:8b:fb,vlan=0,model=virtio,name=virtio.0
     -net tap,fd=19,vlan=0,name=tap.0 -serial pty -parallel none -usb
     -vnc 127.0.0.1:2 -k en-us -vga vmware
 
 root 13953  0.2  1.3 450332  54832 ?  Sl   2009   167:25 /usr/bin/kvm -S -M pc -m 128
     -smp 1 -name nginx -uuid 793160c1-5800-72cf-7b66-8484f931d396
     -monitor unix:/var/lib/libvirt/qemu/nginx.monitor,server,nowait -boot c
     -drive file=/dev/vg0/nginx,if=virtio,index=0,boot=on
     -net nic,macaddr=52:54:00:06:49:d5,vlan=0,model=virtio,name=virtio.0
     -net tap,fd=21,vlan=0,name=tap.0 -serial pty -parallel none -usb
     -vnc 127.0.0.1:3 -k en-us -vga vmware
 
 root 14051 31.4  6.7 441964 273132 ?  Rl   01:19    30:35 /usr/bin/kvm -S -M pc -m 256
     -smp 1 -name pfsense -uuid 0af4dfac-70f1-c348-9ce5-0df18e9bdc2c
     -monitor unix:/var/lib/libvirt/qemu/pfsense.monitor,server,nowait -boot c
     -drive file=/dev/vg0/pfsense,if=ide,index=0,boot=on
     -net nic,macaddr=00:19:5b:86:3e:fb,vlan=0,model=e1000,name=e1000.0
     -net tap,fd=22,vlan=0,name=tap.0
     -net nic,macaddr=52:54:00:53:62:b9,vlan=1,model=e1000,name=e1000.1
     -net tap,fd=28,vlan=1,name=tap.1 -serial pty -parallel none -usb
     -vnc 0.0.0.0:0 -k en-us -vga vmware
 
 root 15528 19.7  6.6 440736 270484 ?  Sl   01:37    15:38 /usr/bin/kvm -S -M pc -m 256
     -smp 1 -name pfsense2 -uuid 2c4000a0-7565-b12d-1e2a-1e77cdb778d3
     -monitor unix:/var/lib/libvirt/qemu/pfsense2.monitor,server,nowait -boot c
     -drive file=/dev/vg0/pfsense2,if=ide,index=0,boot=on
     -drive file=/mnt/boris/data/pub/diskimage/pfSense-1.2.2-LiveCD-Installer.iso,if=ide,media=cdrom,index=2,format=
     -net nic,macaddr=52:54:00:38:fc:a7,vlan=0,model=e1000,name=e1000.0
     -net tap,fd=28,vlan=0,name=tap.0
     -net nic,macaddr=00:24:1d:18:f8:f6,vlan=1,model=e1000,name=e1000.1
     -net tap,fd=29,vlan=1,name=tap.1 -serial pty -parallel none -usb
     -vnc 127.0.0.1:1 -k en-us -vga vmware
 
 root 27079  0.9  0.7 845700  30768 ?  SLl  2009   584:28 /usr/bin/kvm -S -M pc -m 512
     -smp 1 -name asterisk -uuid a87d8fc1-ea90-0db4-d6fe-c04e8f2175e7
     -monitor unix:/var/lib/libvirt/qemu/asterisk.monitor,server,nowait -boot c
     -drive file=/dev/vg0/asterisk,if=virtio,index=0,boot=on
     -net nic,macaddr=52:54:00:68:db:fc,vlan=0,model=virtio,name=virtio.0
     -net tap,fd=23,vlan=0,name=tap.0 -serial pty -parallel none -usb
     -vnc 127.0.0.1:5 -k en-us -vga vmware -soundhw es1370
 
 root 31214  0.6  2.9 465804 121476 ?  Sl   2009   207:08 /usr/bin/kvm -S -M pc -m 256
     -smp 1 -name svn -uuid 6e30e0be-1781-7a68-fa5d-d3c69787e705
     -monitor unix:/var/lib/libvirt/qemu/svn.monitor,server,nowait -boot c
     -drive file=/dev/vg0/svn-root,if=virtio,index=0,boot=on
     -net nic,macaddr=52:54:00:7d:f4:0b,vlan=0,model=virtio,name=virtio.0
     -net tap,fd=27,vlan=0,name=tap.0 -serial pty -parallel none -usb
     -vnc 0.0.0.0:4 -k en-us -vga vmware
 
 Several of these VMs are actually assigned less memory than is stated in
 -m, since I used the virt-manager interface to shrink the memory size.
 awiki is set to 256MB, yet is still somehow using over 800MB of virt; one
 of the anonymous maps in pmap shows up as nearly 512MB (544788K). The rest
 of the VMs show oddities like that as well.
 
 host is debian sid with the 2.6.31-2-amd64 kernel, kvm --version reports:
 
 QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
 
 and just for kicks:
 
 r...@boris:~# free -m
              total   used   free   shared   buffers   cached
 Mem:          3964   3891     72        0       108     1686
 -/+ buffers/cache:   2096   1867
 Swap

Re: FreeBSD guest hogs cpu after disk delay?

2010-01-04 Thread Thomas Fjellstrom
On Mon January 4 2010, Gleb Natapov wrote:
 On Mon, Jan 04, 2010 at 08:06:15AM -0700, Thomas Fjellstrom wrote:
  On Sun January 3 2010, Thomas Fjellstrom wrote:
   On Sun January 3 2010, Thomas Fjellstrom wrote:
I have a strange issue: one of my FreeBSD guests started using up
100% CPU and wouldn't respond on the console after an md-raid check
started on the raid1 volume the vm has its lvm volumes on. About
the only thing I could do was force the vm off and restart it. In
the guest's console there was some kind of DMA warning/error related
to the guest's disk saying it would retry, but it seems it never
got that far.
  
   I forgot to mention, the host is running debian sid with kernel
   2.6.31-1- amd64, and kvm --version reports:
  
   QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
  
   the hosts / and the vm volumes all sit on a lvm volume group, ontop
   of a md- raid1 mirror of two Seagate 7200.12 500GB SATAII drives.
  
   The host is running 7 other guest which all seem to be running
   smoothly, except both freebsd (pfSense) based guests which seemed to
   have locked up after this message:
  
   ad0: TIMEOUT - WRITE_DMA retrying (1 retry left) LBA=16183439
  
   Though the second freebsd guest didn't seem to use up nearly as much
   cpu, but it uses up far lower resources than the first.
 
  No one have an idea whats going wrong? All of my virtio based linux
  guests stayed alive. But both of my FreeBSD guests using whatever ide
  in the - drive option sets locked up solid.
 
 Can you try more recent version of kvm?

This is a production machine that I'd really rather not reboot, or stop the 
VMs on in any way. But qemu-kvm 0.11.0 seems to exist in apt now, so I might 
upgrade soonish (the kvm package seems to have been removed from sid, so it 
hasn't been upgrading).

Also, grub2 seems to be having issues on it, so I'm afraid to reboot at all. 
All of a sudden last month it started refusing to update itself properly. 
Who knows if the box even boots at this point :)

 --
   Gleb.


-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Memory under KVM?

2009-12-16 Thread Thomas Fjellstrom
On Wed December 16 2009, Avi Kivity wrote:
 On 12/16/2009 01:21 AM, Thomas Fjellstrom wrote:
  The problem is it should be automatic. The balloon driver itself or
  some other mechanism should be capable of noticing when it can free
  up a bunch of guest memory. I can't be bothered to manually sit
  around and monitor memory usage on my host so I can then go into
  virt-manager to reduce memory to each guest.
 
  That should be pretty easy though it will have an effect on guest
  performance.
 
  As long as its only done after an appropriately long idle period (ie:
  theres been X MB's free for a long time, give it back), I can't see it
  harming performance too much. At least not more than setting ram too
  low when manually (de)ballooning memory.
 
 It depends on what your expectations are.  If you have a lot of memory
 you might be surprised when you access an idle guest and have to wait
 for it to page itself back from disk.
 

Why would it be swapping in that case? Only unused/free/cache memory should 
be returned to the host.

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Memory under KVM?

2009-12-16 Thread Thomas Fjellstrom
On Wed December 16 2009, Avi Kivity wrote:
 On 12/16/2009 11:58 AM, Thomas Fjellstrom wrote:
  It depends on what your expectations are.  If you have a lot of memory
  you might be surprised when you access an idle guest and have to wait
  for it to page itself back from disk.
 
  Why would it be swaping in that case? Only unused/free/cache memory
  should be returned to the host

Unless of course you were referring to the case of manually de-ballooning 
memory in the guests. Yes, swapping in the guests is slow, and you should 
try not to set the memory limit (-m) too small for a given workload.

Having a dynamic ballooning feature that did not actually change the guest's 
view of RAM wouldn't have that problem, especially since you're not 
returning any memory that's in use in the guest. And since KVM already 
supports running with large ranges of its assigned memory not actually 
allocated, dynamic ballooning probably isn't hard to support.

The memory over-commit rate on my old setup was rather astonishing. A 
couple of my guests would eventually get as low as showing 10MB ram in use. 
Even the larger memory users would get down as low as 1/5th the allocated 
ram after sitting mostly idle for a while. But since the full assigned ram 
is sometimes needed, just reducing the total assignment isn't a good option.

 Right, it would return cache memory, and when you use the guest next 
 time, it will have to refill its cache.

Sure, but there are hours where the guests can run with minimal memory use. 
It would allow one to run many more guests at the same time, if you know 
some/many of them won't always be using all of their assigned ram.

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Memory under KVM?

2009-12-15 Thread Thomas Fjellstrom
On Tue December 15 2009, Avi Kivity wrote:
 On 12/13/2009 07:16 PM, Thomas Fjellstrom wrote:
  Linux usually keeps very little RAM free (it's kept as cache).  So
  there has to be some action on the part of the host to get the guest
  to free things.  For Windows guests you can use ksm to reclaim free
  memory (since Windows will zero it).
 
  I'm waiting for 2.6.32 to hit Debian Sid before I start playing with
  ksm (I don't think its in 2.6.31).
 
  The problem is it should be automatic. The balloon driver itself or
  some other mechanism should be capable of noticing when it can free up
  a bunch of guest memory. I can't be bothered to manually sit around and
  monitor memory usage on my host so I can then go into virt-manager to
  reduce memory to each guest.
 
 That should be pretty easy though it will have an effect on guest
 performance.
 

As long as it's only done after an appropriately long idle period (i.e.: 
there's been X MB free for a long time, so give it back), I can't see it 
harming performance too much. At least not more than setting RAM too low 
when manually (de)ballooning memory.

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Memory under KVM?

2009-12-14 Thread Thomas Fjellstrom
On Mon December 14 2009, rek2 wrote:
  VIRT includes a lot of shared memory, so it's not a very useful number
  to look at when trying to gauge how much memory a process is using.
 
 Ok, so then what stats should we look to calculate the amount of memory
 a server should have depending on how many guests we will like to use?
 
 Thanks again

If you want to be extra careful, you'll want to get as much RAM as you need 
for all the guests, plus the host, and probably leave a little free for the 
host to use as a disk cache.

At least with the kvm setup I have, there is no way to overcommit memory 
other than to use the host's swap. Things are pretty tight memory-wise for 
me; I'm hoping KSM in 2.6.32 will help alleviate matters somewhat.
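
As I understand it, once on 2.6.32 KSM is driven through sysfs, roughly
like this (a sketch based on the kernel docs):

  echo 1 > /sys/kernel/mm/ksm/run          # start the KSM scanner
  cat /sys/kernel/mm/ksm/pages_sharing     # pages currently deduplicated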

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Memory under KVM?

2009-12-13 Thread Thomas Fjellstrom
On Sun December 13 2009, Avi Kivity wrote:
 On 12/12/2009 10:37 PM, Thomas Fjellstrom wrote:
  I have the opposite happen, when a VM is started, RES is usually lower
  than -m, which I find slightly odd. But makes sense if qemu/kvm don't
  actually allocate memory from the host till its requested the first
  time
 
 That is the case.
 
  (if only it
  would return some of it afterwards, it would be even better).
 
 Use the balloon driver to return memory to the host.

Will it actually just free the memory and leave the total memory size in the 
VM alone? Last I checked it would just decrease the total memory size, which 
isn't that useful. Sometimes the guest needs more RAM, so it's given 512M, 
but most of the time it can live on 100M or so.
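
For context, what I'd tried is the monitor balloon command, along these
lines (a sketch; the sizes are examples):

  (qemu) balloon 128    # shrinks the guest's visible memory to 128 MB
  (qemu) balloon 512    # later, grows it back to the full allocation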

  I just fully shut down and restarted on of my vms, which is set to use
  128-256 MB ram max. RES is like 72MB on start, and VIRT is 454M. RES
  generally gets up around 120MB ram when its doing something.
 
  One thing I do find a little odd is one of my VMs which is allocated
  512MB ram, has a VIRT of 826MB ram. I didn't realize that qemu had so
  many lib dependencies.
 
 It's not just libraries, it's mostly glibc malloc() allocating huge
 pools per thread, as well as large thread stacks.
 
  Due to kvm not supporting giving memory back, besides by
  swapping large portions of unused guest ram, my host currently has over
  1G used swap. Not particularly happy with that, but it doesn't seem to
  effect performance too much (except that it generally likes to swap
  host processes first, guest performance is decent, but host, not so
  much).
 
 The Linux vm prefers anonymous memory, so guests do get an advantage.
 

I think the only thing I'd like to have now is automatic memory return, much 
like VMware Server has: it doesn't change what the guest VM sees, it just 
flushes the unused RAM back to the host.

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Memory under KVM?

2009-12-13 Thread Thomas Fjellstrom
On Sun December 13 2009, Avi Kivity wrote:
 On 12/13/2009 06:41 PM, Thomas Fjellstrom wrote:
  Use the balloon driver to return memory to the host.
 
  Will it actually just free the memory and leave the total memory size
  in the VM alone? Last I checked it would just decrease the total memory
  size, which isn't that useful. Sometimes it needs more ram, so its
  given 512M ram, but most of the time can live on 100M or so.
 
 If you balloon and then balloon back, the guest will be able to
 reallocate all this memory.
 
  The Linux vm prefers anonymous memory, so guests do get an advantage.
 
  I think the only thing I'd like to have now is automatic memory return,
  much like vmware server has. It doesn't change what the guest VM sees,
  it just flushes the unused ram back to the host.
 
 Linux usually keeps very little RAM free (it's kept as cache).  So there
 has to be some action on the part of the host to get the guest to free
 things.  For Windows guests you can use ksm to reclaim free memory
 (since Windows will zero it).
 

I'm waiting for 2.6.32 to hit Debian Sid before I start playing with ksm (I 
don't think it's in 2.6.31).

The problem is that it should be automatic. The balloon driver itself, or 
some other mechanism, should be capable of noticing when it can free up a 
bunch of guest memory. I can't be bothered to sit around monitoring memory 
usage on my host just so I can go into virt-manager and manually reduce each 
guest's memory.
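
Something like this host-side loop is the kind of thing I mean -- a crude 
sketch only, assuming libvirt is managing the guests and is new enough to 
have dommemstat; the domain name, interval and 25% headroom are made up:

  #!/bin/sh
  # naive auto-balloon: every minute, pull the balloon target down to
  # roughly what the guest is actually using, plus some headroom
  GUEST=guest1
  while sleep 60; do
      rss=$(virsh dommemstat "$GUEST" | awk '/^rss/ {print $2}')  # in KiB
      [ -n "$rss" ] && virsh setmem "$GUEST" $((rss + rss / 4))
  done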

What vmware server had worked great. After some time (quite a lot, usually) 
something would flush the cache and return most of the unused guest ram back 
to the host.

Also, I don't have any windows guests atm, just 2 BSDs and 5 linux guests. 
I've had to do some tweaking guest-side to cut down on ram (reducing the 
threads/forks of apache, mysql, nginx and other services), something that 
wasn't necessary at all with vmware server.

As I said before, that is pretty much the last feature that would make KVM 
perfect for my purposes: returning guest memory to the host without actually 
changing the allocation the guest sees.

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Memory under KVM?

2009-12-13 Thread Thomas Fjellstrom
On Sun December 13 2009, rek2 wrote:
 Hi Thanks for the responses, but look:
 PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
 /usr/bin/kvm -S -M pc-0.11 -m 1024 -smp 1 -name vm_hsci -uuid
 52ed4c7c-65e4-325e-0f96-87a5be6d854c -monitor
 unix:/var/run/libvirt/qemu/vm_hsci.monitor,server,nowait -boot c -drive
 file=/var/kvm_images/vm_hsci.img,if=virtio,index=0,boot=on -net
 nic,macaddr=00:16:36:5b:c4:e2,vlan=0,name=nic.0 -net
 tap,fd=16,vlan=0,name=tap.0 -serial pty -parallel none -usb -vnc
 127.0.0.1:0 -k en-us -vga cirrus -soundhw es1370
 
 I have -m with 1024, right?
 but if I do a top:
 1214m 364m 3360 S0  4.5  46:55.24 kvm
 
 so it's clearly above -m.
 What could be the issue?
 Thanks

VIRT includes a lot of shared memory, so it's not a very useful number to 
look at when trying to gauge how much memory a process is using.
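
If you want a better number than VIRT, summing the resident set from the 
process's smaps gets much closer to the real footprint -- a quick sketch 
(substitute the qemu/kvm pid for $PID):

  # approximate how much of the process is actually resident, in kB
  awk '/^Rss:/ {sum += $2} END {print sum " kB"}' /proc/$PID/smaps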

 
 it is using 1214m at this moment; sometimes it goes up a lot more..
 


-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Memory under KVM?

2009-12-12 Thread Thomas Fjellstrom
On Sat December 12 2009, Avi Kivity wrote:
 On 12/11/2009 11:43 PM, rek2 wrote:
  Hi everyone, I'm new to the list and I have a couple questions that we
  are wondering about here at work...
  we have noticed that the KVM processes on the host take much more
  memory than the memory we have told the VM to use... a rough example:
  if we tell KVM to use 2 gigs for one VM, it will end up showing in the
  host process list for that VM like 3 gigs or more...
  Why do I ask this? well we need to figure out how much memory to add
  to our host server so we can calculate the number of VM's we can run
  there etc etc..
 
 Can you give an example?  A snapshot from 'top' would do.
 

I have the opposite happen: when a VM is started, RES is usually lower than 
-m, which I find slightly odd, but makes sense if qemu/kvm don't actually 
allocate memory from the host until it's requested the first time (if only it 
would return some of it afterwards, it would be even better).

I just fully shut down and restarted one of my VMs, which is set to use 
128-256 MB ram max. RES is like 72MB on start, and VIRT is 454M. RES 
generally gets up around 120MB ram when it's doing something.

One thing I do find a little odd is that one of my VMs, which is allocated 
512MB ram, has a VIRT of 826MB. I didn't realize that qemu had so many lib 
dependencies. Due to kvm not supporting giving memory back, besides by 
swapping large portions of unused guest ram, my host currently has over 1G 
of used swap. Not particularly happy with that, but it doesn't seem to affect 
performance too much (except that it generally likes to swap host processes 
first; guest performance is decent, but host, not so much).

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Virtualization Performance: Intel vs. AMD

2009-11-15 Thread Thomas Fjellstrom
On Sun November 15 2009, Neil Aggarwal wrote:
  The Core i7 has hyperthreading, so you see 8 logical CPUs.
 
 Are you saying the AMD processors do not have hyperthreading?

Of course not. Hyperthreading is dubious at best.

 I have a machine with two six-core AMD Opterons.
 top shows me 12 logical CPUs.

If it had hyperthreading, you'd see 24 logical CPUs:
(6 + 6) cores == 12, and 12 * 2 (HT) == 24.

Those six cores in each CPU are actual physical cores, not fake logical 
cores.
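
An easy way to check what a box actually has is to compare the standard 
/proc/cpuinfo fields; with HT enabled, 'siblings' is twice 'cpu cores':

  grep -c ^processor /proc/cpuinfo           # logical CPUs the kernel sees
  grep 'cpu cores' /proc/cpuinfo | sort -u   # physical cores per package
  grep 'siblings'  /proc/cpuinfo | sort -u   # logical CPUs per package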

   Neil
 
 
 --
 Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net
 CentOS 5.4 VPS with unmetered bandwidth only $25/month!
 7 day no risk trial, Google Checkout accepted
 


-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Virtualization Performance: Intel vs. AMD

2009-11-15 Thread Thomas Fjellstrom
On Sun November 15 2009, Gordan Bobic wrote:
 Thomas Fjellstrom wrote:
  On Sun November 15 2009, Neil Aggarwal wrote:
  The Core i7 has hyperthreading, so you see 8 logical CPUs.
 
  Are you saying the AMD processors do not have hyperthreading?
 
   Of course not. Hyperthreading is dubious at best.
 
 That's a rather questionable answer to a rather broad issue. SMT is
 useful, especially on processors with deep pipelines (think Pentium 4 -
 and in general, deeper pipelines tend to be required for higher clock
 speeds), because it reduces the number of context switches. Context
 switches are certainly one of the most expensive operations if not the
 most expensive operation you can do on a processor, and typically
 requires flushing the pipelines. Double the number of hardware threads,
 and you halve the number of context switches.

Hardware context switches aren't free either. And while it really has 
nothing to do with this discussion, the P4 arch was far from perfect (many 
would say far from GOOD).

 This typically isn't useful if your CPU is processing one
 single-threaded application 99% of the time, but on a loaded server it
 can make a significant difference to throughput.

I'll buy that, though you'll have to agree that the initial hyperthreading 
implementation in Intel CPUs was really bad. I hear good things about the 
latest version though.

But hey, if you can stick more cores in, or do what AMD is doing with its 
upcoming line, why not do that? Hyperthreading seems like more of a gimmick 
than anything. What seems to help the most with the new Intel arch is the 
automatic overclocking (Turbo Boost) when some cores are idle; far more of a 
performance improvement than hyperthreading will ever be, it seems.

But maybe that's just me.

 Gordan


-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Reserve CPU cores for specific guests?

2009-11-08 Thread Thomas Fjellstrom
On Sun November 8 2009, Neil Aggarwal wrote:
  I think you can achieve that on some simple level DIY with
  taskset from
  util-linux(-ng).
 
 That is a good utility to know.  I did not know about that
 earlier.  Thanks for the info.
 
 I am wondering one thing though:
 
 I will either need to call taskset when executing the
 process or run taskset on a PID after it starts up.
 
 Unless there is a way to tell KVM to call taskset when starting
 a guest, I think that is going to be hard to automate since the
 guests will get different PID each time they are started.
 
 Any suggestions?


Nothing directly related, but libvirt's kvm support allows pinning a vm to a 
physical cpu; at least the option is there in virt-manager.
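
For completeness, both approaches side by side -- taskset at launch time, or 
libvirt pinning after the fact (the domain name, disk image and CPU numbers 
are just examples):

  # pin the whole qemu/kvm process to host cores 2-3 when starting it:
  taskset -c 2,3 kvm -m 512 guest1.img

  # or, with libvirt, pin vcpu 0 of the running domain 'guest1' to cpu 2:
  virsh vcpupin guest1 0 2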

 Thanks,
   Neil
 
 --
 Neil Aggarwal, (281)846-8957, http://www.JAMMConsulting.com
 CentOS 5.4 KVM VPS $55/mo, no setup fee, no contract, dedicated 64bit CPU
 1GB dedicated RAM, 40GB RAID storage, 500GB/mo premium BW, Zero downtime
 


-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: [Qemu-devel] Re: [PATCH] Add VirtIO Frame Buffer Support

2009-11-07 Thread Thomas Fjellstrom
On Tue November 3 2009, Avi Kivity wrote:
 On 11/03/2009 01:25 PM, Vincent Hanquez wrote:
  not sure if i'm missing the point here, but couldn't it be
  hypothetically extended to stuff 3d (or video  more 2d accel ?)
  commands too ? I can't imagine the cirrus or stdvga driver be able to
  do that ever ;)
 
 cirrus has pretty good 2d acceleration.  3D is a mega-project though.
 

You're kidding, right? Why do I have to switch to vmware-vga so that dmesg 
in a guest doesn't take a couple of minutes to scroll by? Maybe a 
configuration issue?

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: automatic memory ballooning?

2009-08-17 Thread Thomas Fjellstrom
On Sun August 16 2009, Dor Laor wrote:
 On 08/16/2009 05:18 PM, Thomas Fjellstrom wrote:
  On Sun August 16 2009, Avi Kivity wrote:
  On 08/16/2009 12:55 PM, Thomas Fjellstrom wrote:
  I'm wondering if kvm supports automatic memory ballooning. I've had a
  kvm guest running for a couple days, and the balloon driver was loaded,
  and I could manually change the amount of ram it had allocated in the
  console, but it never seemed to change automatically.
 
  Is there any support for that?
 
  That would be part of a management application.  qemu only knows about
  the guest it controls, while ballooning needs a global view of the
  system.
 
  All a single guest needs to do is only use as much ram as it needs at any
  given time (up to the max allocated). So if the guest hasn't used much
  ram in a given time frame, release the free ram back to the host, and
  only reallocate it when needed. It doesn't _need_ a management
  application; that just happens to be the way people do it.

 This is far from being an accurate description of the reality ( ;) )
 You cannot just expect the guest to do so. The guest has a page cache that
 uses memory, it might run many processes that consume lots of memory, etc.
 Even if you could do it, the translation between guest and host is not
 1:1, and the host needs to be aware of the guest's memory usage.

 This is what ballooning does. A target is determined by the host
 management daemon. In response, the guest balloon driver tries to allocate
 memory and passes it as Guest Physical Addresses to the host. Now the host
 can use madvise to mark these pages as not needed (and free the mmu from
 pinning them).

 The complexity is for the management to dynamically shift memory between
 the host and the guest to reach maximum performance.

 Regards,
 Dor

One thing I found odd about kvm's ballooning is that it actually seems to 
change how much ram the guest has. I really didn't expect free -m to report 
that the guest only had 64M of ram after I manually ballooned it. I was 
expecting it to just free the ram the guest wasn't using back to the host. To 
me those just don't seem to be the same thing: now the guest will start 
swapping at 64M instead of just reallocating the ram it used to have.

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: automatic memory ballooning?

2009-08-17 Thread Thomas Fjellstrom
On Mon August 17 2009, Avi Kivity wrote:
 On 08/17/2009 01:49 PM, Thomas Fjellstrom wrote:
  One thing I found odd about kvm's ballooning is that it actually seems to
  change how much ram the guest has. I really didn't expect free -m to
  report that the guest only had 64M of ram after I manually ballooned it.
  I was expecting it to just free the ram the guest wasn't using back to
  the host. To me those just don't seem to be the same thing: now the guest
  will start swapping at 64M instead of just reallocating the ram it used
  to have.

 Your expectations aren't realistic.  kvm never allocates the ram the
 guest doesn't use in the first place.

Really? So htop is lying to me then? I gave 1G of ram to a kvm linux guest 
using virtio (disk, net, ballooning), and RES clearly said 1G, while VIRT 
said somewhere around 1.3 to 1.6G. It stayed that way for over a day, and the 
guest did nothing the entire time.

I know the kernel lies a little bit about ram usage, but it seems that at 
least with kvm, the ram is in use when it says it is, while with vmware it 
usually isn't.

 Ballooning just the free
 memory is pointless since it's usually a very small amount.

 It may be worthwhile for the guest to give up that memory voluntarily
 though.

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: automatic memory ballooning?

2009-08-17 Thread Thomas Fjellstrom
On Sun August 16 2009, Avi Kivity wrote:
 On 08/16/2009 12:55 PM, Thomas Fjellstrom wrote:
  I'm wondering if kvm supports automatic memory ballooning. I've had a kvm
  guest running for a couple days, and the balloon driver was loaded, and I
  could manually change the amount of ram it had allocated in the console,
  but it never seemed to change automatically.
 
  Is there any support for that?

 That would be part of a management application.  qemu only knows about
 the guest it controls, while ballooning needs a global view of the system.

Where can I find such a management application?

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: automatic memory ballooning?

2009-08-16 Thread Thomas Fjellstrom
On Sun August 16 2009, Avi Kivity wrote:
 On 08/16/2009 12:55 PM, Thomas Fjellstrom wrote:
  I'm wondering if kvm supports automatic memory ballooning. I've had a kvm
  guest running for a couple days, and the balloon driver was loaded, and I
  could manually change the amount of ram it had allocated in the console,
  but it never seemed to change automatically.
 
  Is there any support for that?

 That would be part of a management application.  qemu only knows about
 the guest it controls, while ballooning needs a global view of the system.

All a single guest needs to do is only use as much ram as it needs at any 
given time (up to the max allocated). So if the guest hasn't used much ram in 
a given time frame, release the free ram back to the host, and only 
reallocate it when needed. It doesn't _need_ a management application; that 
just happens to be the way people do it.
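
One catch is that Linux keeps most nominally free ram busy as page cache, so 
guest-side the closest thing today is flushing that cache by hand before 
ballooning down -- safe, if heavy-handed:

  # inside the guest, as root: write out dirty data, then drop the
  # page/dentry/inode caches so the balloon driver can find free pages
  sync
  echo 3 > /proc/sys/vm/drop_caches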

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Recent kvm and vmware server comparisons?

2009-02-19 Thread Thomas Fjellstrom
On Thursday 19 February 2009, Hans de Bruin wrote:
 Martin Maurer wrote:
  I suppose no-one has any?
 
  VMware includes in its EULA (End User License Agreement) a prohibition
  for any licensee to publish benchmark results without VMware's approval.
  (see https://www.vmware.com/tryvmware/eula.php)
 
  Maybe this is a reason why all published VMware benchmarks look quite
  similar :-)

  I would love to see a comparison, but due to these restrictions it's hard
  to get independent results.

 Why compare kvm to vmware and not to real hardware? The results can then
 be compared to vmware/hardware and hyper-v/hardware.

hyper-v doesn't provide network or disk io ;)

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Recent kvm and vmware server comparisons?

2009-02-18 Thread Thomas Fjellstrom
On Tuesday 10 February 2009, Thomas Fjellstrom wrote:
 I've temporarily got vmware server running on my new server, and intend
 to migrate over to kvm as soon as possible, if it provides enough
 incentive (extra performance, features). Currently I'm waiting for full
 iommu support in the kernel, modules and userspace; I didn't plan to
 migrate till I had hardware that could do iommu, kvm fully supported
 iommu + DMA for devices passed through, it could also pass through more
 than one device per guest (I saw hints that the Intel iommu
 implementation can only do one device per guest? Please tell me I'm
 wrong; it seems like an odd design choice to make), and full migration
 worked.

 But if I can get enough performance over vmware server 2 with plain old kvm
 + virtio, I'd happily migrate.

 I saw a message late last year comparing the two, but I know how quickly
 things change in the OSS world, and I also intend to use raw devices
 (possibly AoE) for guest disks (not qcow or anything like it), and virtio
 for networking.

 So has anyone tested the two lately? Got any experiences you'd like to
 share?

I suppose no-one has any?

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Recent kvm and vmware server comparisons?

2009-02-18 Thread Thomas Fjellstrom
On Wednesday 18 February 2009, Martin Maurer wrote:
  I suppose no-one has any?

 VMware includes in its EULA (End User License Agreement) a prohibition for
 any licensee to publish benchmark results without VMware's approval. (see
 https://www.vmware.com/tryvmware/eula.php)

 Maybe this is a reason why all published VMware benchmarks look quite
 similar :-)

 I would love to see a comparison, but due to these restrictions it's hard
 to get independent results.

 Br, Martin


I hardly think it stops people from casually talking about their day-to-day 
experiences with vmware and how kvm matches up to it. And even if it did, it 
doesn't sound like something that's actually legally binding. Otherwise I 
could start putting things like YOU MUST NEVER TALK AGAIN in my EULAs.

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Recent kvm and vmware server comparisons?

2009-02-10 Thread Thomas Fjellstrom
I've temporarily got vmware server running on my new server, and intend to 
migrate over to kvm as soon as possible, if it provides enough incentive 
(extra performance, features). Currently I'm waiting for full iommu support 
in the kernel, modules and userspace; I didn't plan to migrate till I had 
hardware that could do iommu, kvm fully supported iommu + DMA for devices 
passed through, it could also pass through more than one device per guest (I 
saw hints that the Intel iommu implementation can only do one device per 
guest? Please tell me I'm wrong; it seems like an odd design choice to make), 
and full migration worked.

But if I can get enough performance over vmware server 2 with plain old kvm + 
virtio, I'd happily migrate.

I saw a message late last year comparing the two, but I know how quickly 
things change in the OSS world, and I also intend to use raw devices 
(possibly AoE) for guest disks (not qcow or anything like it), and virtio for 
networking.
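
In case a concrete example helps anyone answering: the sort of invocation I 
have in mind is roughly the following (the LV path, MAC address and tap name 
are placeholders, and cache=none for raw block devices is my own assumption):

  kvm -m 1024 \
      -drive file=/dev/vg0/guest1,if=virtio,cache=none \
      -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
      -net tap,ifname=tap0,script=no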

So has anyone tested the two lately? Got any experiences you'd like to share?

-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Status of pci passthrough work?

2008-09-27 Thread Thomas Fjellstrom
On Saturday 27 September 2008, Han, Weidong wrote:
 Hi Thomas,

 the patches for passthrough/VT-d on kvm.git are already checked in. With
 Amit's userspace patches, you can assign a device to a guest. You can give
 it a try.

Does that mean I need VT-d support in hardware? All I have to test with 
right now is an AMD Phenom X4 on a 780g+sb700 system. I don't think it has an 
iommu, and I'd find it odd if the Intel VT-d code just worked on AMD's 
hardware.
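
(For anyone else wondering the same thing: a quick way to see whether a 
platform advertises an iommu at all is to look for the relevant ACPI table -- 
DMAR on Intel VT-d systems, IVRS on AMD iommu systems:)

  # on the host:
  dmesg | grep -i -e dmar -e iommu
  ls /sys/firmware/acpi/tables/ | grep -i -e dmar -e ivrs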

 Randy (Weidong)

 Thomas Fjellstrom wrote:
  I'm very interested in being able to pass a few devices through to
  kvm guests. I'm wondering what exactly is working now, and how I can
  start testing it?
 
  the latest kvm release doesn't seem to include any support for it in
  userspace, so I can't test it with that...
 
  Basically what I want to do is assign two or three physical nics
  (100mb and GiB) to one vm, and some tv tuner cards to another.
 
  Also, I'm wondering if AMD's iommu in the SB750 southbridge is
  supported yet? Or if anyone is working on it?
 
  --
  Thomas Fjellstrom
  [EMAIL PROTECTED]



-- 
Thomas Fjellstrom
[EMAIL PROTECTED]


Re: Status of pci passthrough work?

2008-09-27 Thread Thomas Fjellstrom
On Saturday 27 September 2008, Han, Weidong wrote:
 Thomas Fjellstrom wrote:
  On Saturday 27 September 2008, Han, Weidong wrote:
  Hi Thomas,
 
  the patches of passthrough/VT-d on kvm.git are already checked in.
  With Amit's userspace patches, you can assign device to guest. You
  can have a try.
 
  Does that mean I need VT-d support in hardware? All I have to test
  with right now is an AMD Phenom X4 on a 780g+sb700 system. I don't
  think it has an iommu, and I'd find it odd if the Intel VT-d code
  just worked on AMD's hardware.

 Yes, currently you need VT-d support in hardware to assign device.

So I take it the PV-DMA work (or pv-dma doesn't do what I think it does...) 
and the other 1:1 device passthrough work aren't usable right now?

It's something I'd really like to use, but I don't have access to a platform 
with a hardware iommu. I might be able to pick up a replacement board for my 
new server with the SB750 southbridge, which supposedly has AMD's new iommu 
hardware in it, but I haven't seen any evidence that kvm or linux supports 
it.

 Randy (Weidong)

  Randy (Weidong)
 
  Thomas Fjellstrom wrote:
  I'm very interested in being able to pass a few devices through to
  kvm guests. I'm wondering what exactly is working now, and how I
  can start testing it?
 
  the latest kvm release doesn't seem to include any support for it in
  userspace, so I can't test it with that...
 
   Basically what I want to do is assign two or three physical nics
   (100mb and GiB) to one vm, and some tv tuner cards to another.
 
  Also, I'm wondering if AMD's iommu in the SB750 southbridge is
  supported yet? Or if anyone is working on it?
 
  --
  Thomas Fjellstrom
  [EMAIL PROTECTED]
 
  --
  Thomas Fjellstrom
  [EMAIL PROTECTED]


-- 
Thomas Fjellstrom
[EMAIL PROTECTED]


Re: Status of pci passthrough work?

2008-09-27 Thread Thomas Fjellstrom
On Saturday 27 September 2008, Jan C. Bernauer wrote:
 Hi,

  I have about the same problem, so excuse me for hijacking this thread.

 My hardware consists of a 780g/SB700 Mainboard and a 4850e AMD CPU, and
 I'm interested in forwarding a DVB-C tuner card to the guest. Maybe
 some NICs later.

 I tried and 'sort of' got it working with Amit's kernel and userspace
 tools.
 First thing:
 The dvb-c card has an interesting memory mapping, as reported by
 lspci -v:
 Memory at cfdff000 (32-bit, non-prefetchable) [size=512]

 Size 512 doesn't fly with a check in kvm_main.c:
 	if (mem->memory_size & (PAGE_SIZE - 1))
 		goto out;
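 	/* i.e. a memory slot's size must be a whole number of pages;
 	   rounding the 512-byte BAR up to 4096, as below, satisfies it */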

 So I patched the userspace utilities to use 4096 instead.

 With that patch, the guest saw the card, the driver got loaded,
 and channel tuning works, but I get some i2c timeouts on the
 guest side, and the host side has errors like:

 [ cut here ]
 Sep 22 02:28:54 [kernel] WARNING: at kernel/irq/manage.c:180
 enable_irq+0x3a/0x55()
 Sep 22 02:28:54 [kernel] Unbalanced enable for IRQ 20
 Sep 22 02:28:54 [kernel] Modules linked in: sha256_generic cbc dm_crypt
 crypto_blkcipher kvm_amd kvm bridge stp llc stv0297 budget_core dvb_core
 saa7146 ttpci_eeprom ir_common k8temp i2c_core dm_snapshot dm_mirror
 dm_log scsi_wait_scan
 [last unloaded: budget_ci]
 Sep 22 02:28:54 [kernel] Pid: 5283, comm: qemu-system-x86 Tainted: G
 W 2.6.27-rc5-11874-g19561b6 #11
 Sep 22 02:28:54 [kernel] Call Trace:
 Sep 22 02:28:54 [kernel]  [80238b04] warn_slowpath+0xb4/0xdc
 Sep 22 02:28:54 [kernel]  [8026b099]
 __alloc_pages_internal+0xde/0x419
 Sep 22 02:28:54 [kernel]  [802758d0] get_user_pages+0x401/0x4ae
 Sep 22 02:28:54 [kernel]  [80349269] __next_cpu+0x19/0x26
 Sep 22 02:28:54 [kernel]  [80230ce2]
 find_busiest_group+0x315/0x7c3
 Sep 22 02:28:54 [kernel]  [a005de31] gfn_to_hva+0x9/0x5d [kvm]
 - Last output repeated twice -
 Sep 22 02:28:54 [kernel]  [a005e01b]
 kvm_read_guest_page+0x34/0x46 [kvm]
 Sep 22 02:28:54 [kernel]  [a005e06c] kvm_read_guest+0x3f/0x7c
 [kvm]
 Sep 22 02:28:54 [kernel]  [a0068bfe]
 paging64_walk_addr+0xe0/0x2c1 [kvm]
 Sep 22 02:28:54 [kernel]  [80260d59] enable_irq+0x3a/0x55
 Sep 22 02:28:54 [kernel]  [a006df50]
 kvm_notify_acked_irq+0x17/0x30 [kvm]
 Sep 22 02:28:54 [kernel]  [a00701c5]
 kvm_ioapic_update_eoi+0x2f/0x6e [kvm]
 Sep 22 02:28:54 [kernel]  [a006f6da]
 apic_mmio_write+0x24a/0x546 [kvm]
 Sep 22 02:28:54 [kernel]  [a006498d]
 emulator_write_emulated_onepage+0xa1/0xf3 [kvm]
 Sep 22 02:28:54 [kernel]  [802206f8] paravirt_patch_call+0x13/0x2b
 Sep 22 02:28:54 [kernel]  [a006c93e] x86_emulate_insn+0x366a/0x41de [kvm]
 Sep 22 02:28:54 [kernel]  [802206fa] paravirt_patch_call+0x15/0x2b
 Sep 22 02:28:54 [kernel]  [a005f90b] kvm_get_cs_db_l_bits+0x22/0x3a [kvm]
 Sep 22 02:28:54 [kernel]  [a006167d]
 emulate_instruction+0x198/0x25c [kvm]
 Sep 22 02:28:54 [kernel]  [a0067dfe]
 kvm_mmu_page_fault+0x46/0x83 [kvm]
 Sep 22 02:28:54 [kernel]  [a00636a9]
 kvm_arch_vcpu_ioctl_run+0x456/0x65c [kvm]
 Sep 22 02:28:54 [kernel]  [8024d605] hrtimer_start+0x111/0x133
 Sep 22 02:28:54 [kernel]  [a005d451] kvm_vcpu_ioctl+0xe0/0x459
 [kvm]
 Sep 22 02:28:54 [kernel]  [a005ee43] kvm_vm_ioctl+0x203/0x21b
 [kvm]
 Sep 22 02:28:54 [kernel]  [802353b6] finish_task_switch+0x2b/0xc4
 Sep 22 02:28:54 [kernel]  [8029e3b5] vfs_ioctl+0x21/0x6c
 Sep 22 02:28:54 [kernel]  [8029e627] do_vfs_ioctl+0x227/0x23d
 Sep 22 02:28:54 [kernel]  [8029e67a] sys_ioctl+0x3d/0x5f
 Sep 22 02:28:54 [kernel]  [8020b45a]
 system_call_fastpath+0x16/0x1b
 Sep 22 02:28:54 [kernel] ---[ end trace 7b8b990423985ddf ]---
 Sep 22 02:28:54 [kernel] [ cut here ]


 Xen works with that card, but Xen has other problems, and kvm is much
 nicer :) So if you need a guinea pig with basic debugging knowledge, I'm
 your man.

How did you manage to pull together those patches? They all seem so old, and 
won't likely apply cleanly to git head :(

 Best regards,
 Jan C. Bernauer


-- 
Thomas Fjellstrom
[EMAIL PROTECTED]


Re: Status of pci passthrough work?

2008-09-27 Thread Thomas Fjellstrom
On Saturday 27 September 2008, Jan C. Bernauer wrote:
 Thomas Fjellstrom wrote:
  So I've checked out both of those trees and used head, and kvm-userspace
  is erroring out:
 
  gcc -I. -I.. -I/root/kvm-amit-userspace/qemu/target-i386
  -I/root/kvm-amit- userspace/qemu -MMD -MT qemu-kvm-x86.o -MP -DNEED_CPU_H
  -D_GNU_SOURCE - D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D__user=
  -I/root/kvm-amit- userspace/qemu/tcg
  -I/root/kvm-amit-userspace/qemu/tcg/x86_64 -I/root/kvm-
  amit-userspace/qemu/fpu -DHAS_AUDIO -DHAS_AUDIO_CHOICE -I/root/kvm-amit-
  userspace/qemu/slirp -I /root/kvm-amit-userspace/qemu/../libkvm 
  -DCONFIG_X86 -Wall -O2 -g -fno-strict-aliasing  -m64 -I /root/kvm-amit-
  userspace/kernel/include -c -o qemu-kvm-x86.o /root/kvm-amit-
  userspace/qemu/qemu-kvm-x86.c
  /root/kvm-amit-userspace/qemu/qemu-kvm-x86.c:522: error:
  'KVM_FEATURE_CLOCKSOURCE' undeclared here (not in a function)
  /root/kvm-amit-userspace/qemu/qemu-kvm-x86.c:525: error:
  'KVM_FEATURE_NOP_IO_DELAY' undeclared here (not in a function)
  /root/kvm-amit-userspace/qemu/qemu-kvm-x86.c:528: error:
  'KVM_FEATURE_MMU_OP' undeclared here (not in a function)
 
  So I'm a little stuck now.

 Try running
   make sync LINUX=/root/kvm-amit
 (or whatever your kernel source dir is)
 in the kernel sub directory of your kvm-amit-userspace dir.
 Those KVM_FEATURE_* should be defined somewhere in kvm_para.h,
 which is in there.

that leaves me with:

/root/kvm-amit-userspace/qemu/../libkvm/libkvm.h:28: warning: 'struct 
kvm_msr_entry' declared inside parameter list
/root/kvm-amit-userspace/qemu/../libkvm/libkvm.h:28: warning: its scope is 
only this definition or declaration, which is probably not what you want

and a bunch more errors.

 Best regards,
 Jan





-- 
Thomas Fjellstrom
[EMAIL PROTECTED]


Re: Status of pci passthrough work?

2008-09-27 Thread Thomas Fjellstrom
On Saturday 27 September 2008, Jan C. Bernauer wrote:
 Thomas Fjellstrom wrote:
  that leaves me with:
 
  /root/kvm-amit-userspace/qemu/../libkvm/libkvm.h:28: warning: 'struct
  kvm_msr_entry' declared inside parameter list
  /root/kvm-amit-userspace/qemu/../libkvm/libkvm.h:28: warning: its scope
  is only this definition or declaration, which is probably not what you
  want
 
  and a bunch more errors.

 Well, these are warnings, and I might have ignored them :)
 What are the errors?

 Anyway, I'll be off now, so I won't respond till tomorrow.

libkvm.h:404: error: expected '=', ',', ';', 'asm' or '__attribute__' before
'kvm_get_cr8'

libkvm.c:145: error: expected declaration specifiers or '...' before '__u32'

and quite a few more after that.
 Best regards,
 Jan

-- 
Thomas Fjellstrom
[EMAIL PROTECTED]


Status of pci passthrough work?

2008-09-26 Thread Thomas Fjellstrom
I'm very interested in being able to pass a few devices through to kvm guests. 
I'm wondering what exactly is working now, and how I can start testing it?

the latest kvm release doesn't seem to include any support for it in 
userspace, so I can't test it with that...

Basically what I want to do is assign two or three physical nics (100Mb and 
GigE) to one vm, and some tv tuner cards to another.

Also, I'm wondering if AMD's iommu in the SB750 southbridge is supported yet? 
Or if anyone is working on it?

-- 
Thomas Fjellstrom
[EMAIL PROTECTED]