Re: 0.12.x: message Option 'ipv4': Use 'on' or 'off'

2010-01-04 Thread Thomas Beinicke
I've been getting the same message since the update. Is it related to the -chardev option?


On Saturday 02 January 2010 11:53:42 Thomas Mueller wrote:
 hi
 
 since 0.12.x i get the following messages starting a vm:
 
 Option 'ipv4': Use 'on' or 'off'
 Failed to parse yes for dummy.ipv4
 
 
 command is:
 
 
 kvm -usbdevice tablet -drive file=~/virt/xp/drive1.qcow2,cache=writeback -
 drive file=~/virt/xp/drive2.qcow2,cache=writeback -net nic -net
 user,hostfwd=tcp:127.0.0.1:3389-:3389 -k de-ch -m 1024 -smp 2 -vnc
 127.0.0.1:20 -monitor unix:/var/run/kvm-winxp.socket,server,nowait -
 daemonize -localtime
 
 what is option ipv4?
 
 - Thomas
 


Re: [Qemu-devel] Re: [PATCH v2] virtio-blk physical block size

2010-01-04 Thread Christoph Hellwig
On Mon, Jan 04, 2010 at 01:38:51PM +1030, Rusty Russell wrote:
 I thought this was what I was doing, but I have shown over and over that
 I have no idea about block devices.
 
 Our current driver treats BLK_SIZE as the logical and physical size (see
 blk_queue_logical_block_size).
 
 I have no idea what logical vs. physical actually means.  Anyone?  Most
 importantly, is it some Linux-internal difference or a real I/O-visible
 distinction?

Those should be the same for any sane interface.  They are for classical
disk devices with larger block sizes (MO, s390 dasd) and also for the
newly appearing 4k-sector SCSI disks.  But in the IDE world people are
concerned about DOS/Windows legacy compatibility, so they came up with a
nasty hack:

 - there is a physical block size as used by the disk internally
   (4k initially)
 - all the interfaces to the operating system still happen in the
   traditional 512 byte blocks to not break any existing assumptions
 - to make sure modern operating systems can optimize for the larger
   physical sectors, the disks expose this size, too.
 - even worse, disks can also have alignment hacks for the traditional
   DOS partition tables, so that the 512 byte block zero might even
   have an offset into the first larger physical block.  This is also
   exposed in the ATA identify information.

All in all I don't think this mess is a good idea to replicate in
virtio.  Virtio by definition requires virtualization-aware guests, so we
should just follow the SCSI way of larger real block sizes here.
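
For readers who want to see the distinction in code, here is a minimal sketch (my illustration, not qemu or kernel source; the function and the concrete values are hypothetical) of how a Linux block driver would report the IDE-style 512/4k split plus the alignment hack described above, using the block layer's topology setters:

#include <linux/blkdev.h>

/*
 * Hypothetical probe-time setup: the media does 4k I/O internally, the
 * host interface still addresses traditional 512 byte logical sectors,
 * and LBA 0 may not sit on a physical block boundary.
 */
static void example_report_topology(struct request_queue *q)
{
	blk_queue_logical_block_size(q, 512);	/* unit the OS addresses */
	blk_queue_physical_block_size(q, 4096);	/* unit the media really uses */
	blk_queue_alignment_offset(q, 0);	/* non-zero on drives that shift
						   LBA 0 to compensate for DOS
						   partition tables */
}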

 
 Rusty.
 
---end quoted text---


Re: [Qemu-devel] [PATCH RFC] Advertise IDE physical block size as 4K

2010-01-04 Thread Christoph Hellwig
On Tue, Dec 29, 2009 at 02:39:38PM +0100, Luca Tettamanti wrote:
 Linux tools put the first partition at sector 63 (512-byte) to retain
 compatibility with Windows;

Well, some of them, and depending on the exact disks.  It's all rather
complicated.

  It has been discussed for hardware disk design with 4k sectors, and
  somehow there were plans to map sectors so that the Linux partition
  scheme results in nicely aligned filesystem blocks
 
 Ugh, I hope you're wrong ;-) AFAICS remapping will lead only to
 headaches... Linux does not have any problem with aligned partitions.

Linux doesn't care.  Neither does Windows.  But performance on mis-aligned
partitions will suck badly, on 4k sector drives, on SSDs, and probably on
various copy-on-write layers in virtualization once you hit the worst
case.  Fortunately the block topology information present in recent
ATA and SCSI standards allows the storage hardware to tell us about the
required alignment, and Linux now has a topology API to expose it, which
is used by the most recent versions of the partitioning tools and
filesystem creation tools.
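
To make the alignment arithmetic concrete, here is a small illustrative sketch (plain userspace C, not taken from any of the partitioning tools) of the rounding such a tool has to do once it knows the topology; it shows why the traditional first-partition start at LBA 63 gets bumped to LBA 64 on a 512-byte-logical/4k-physical disk:

#include <stdio.h>

/*
 * Round a proposed partition start (in logical sectors) up to the next
 * physically aligned boundary.  alignment_offset is the byte offset of
 * LBA 0 from a physical block boundary (0 for a plain disk).
 */
static unsigned long long align_start(unsigned long long start_lba,
                                      unsigned int logical,    /* e.g. 512  */
                                      unsigned int physical,   /* e.g. 4096 */
                                      unsigned int alignment_offset)
{
	unsigned long long bytes = start_lba * logical + alignment_offset;

	bytes = (bytes + physical - 1) / physical * physical;
	return (bytes - alignment_offset) / logical;
}

int main(void)
{
	/* the legacy DOS layout: first partition at LBA 63 */
	printf("%llu\n", align_start(63, 512, 4096, 0));	/* prints 64 */
	return 0;
}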



Re: [Qemu-devel] [PATCH RFC] Advertise IDE physical block size as 4K

2010-01-04 Thread Christoph Hellwig
On Tue, Dec 29, 2009 at 12:07:58PM +0200, Avi Kivity wrote:
 Guests use this number as a hint for alignment and I/O request sizes.  Given
 that modern disks have 4K block sizes, and cached file-backed images also
 have 4K block sizes, this hint can improve guest performance.
 
 We probably need to make this configurable depending on machine type.  It
 should be the default for -M 0.13 only as it can affect guest code paths.

The information is correct per the ATA spec, but:

 (a) as mentioned above it should not be used for old machine types
 (b) we need to sort out passing through the first block alignment bits
     that are also in IDENTIFY word 106 if using a raw block device
     underneath
 (c) we probably need to adjust the physical block size depending on the
     underlying storage topology.

I have had a patch in my queue for a while now dealing with (b) and parts of
(c), but it's been preempted by more urgent work.
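
For reference, a hedged sketch of how a guest might derive the physical sector size from the IDENTIFY data; this is my reading of the ATA-8 drafts, not code lifted from qemu or the kernel, so treat the bit positions with care:

#include <stdint.h>
#include <stdio.h>

/*
 * IDENTIFY DEVICE word 106 (as I read it):
 *   bits 15:14 == 01b -> word contains valid information
 *   bit  13           -> multiple logical sectors per physical sector
 *   bits  3:0         -> 2^N logical sectors per physical sector
 */
static unsigned int physical_sector_size(uint16_t word106, unsigned int logical)
{
	if ((word106 & 0xc000) != 0x4000)	/* not valid, assume 1:1 */
		return logical;
	if (!(word106 & (1 << 13)))		/* one logical per physical */
		return logical;
	return logical << (word106 & 0x0f);
}

int main(void)
{
	/* e.g. 0x6003: valid, 2^3 = 8 logical sectors -> 4096 bytes */
	printf("%u\n", physical_sector_size(0x6003, 512));
	return 0;
}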



Memory ballooning deactivated by the guest

2010-01-04 Thread Daniel Bareiro
Hi all!

I'm using Linux 2.6.32.2 with qemu-kvm-0.12.1.1 and, testing memory
ballooning, I obtained the following message:

r...@ubuntu:~# telnet localhost 5001
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
QEMU 0.12.1 monitor - type 'help' for more information
(qemu)
(qemu)
(qemu)
(qemu) info balloon
The balloon device has not been activated by the guest


I understood that for ballooning it is sufficient that the guest kernel is
compiled with the virtio-balloon driver:

test:~# uname -a
Linux test 2.6.32-dgb #1 SMP Thu Dec 24 23:42:21 ART 2009 x86_64
GNU/Linux
test:~#
test:~#
test:~#
test:~# cat /boot/config-2.6.32-dgb | grep -i virtio
CONFIG_NET_9P_VIRTIO=m
CONFIG_VIRTIO_BLK=m
CONFIG_VIRTIO_NET=m
# CONFIG_VIRTIO_CONSOLE is not set
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_VIRTIO=m
CONFIG_VIRTIO_RING=m
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=m


What can the problem be?

Thanks in advance.

Regards,
Daniel
-- 
Fingerprint: BFB3 08D6 B4D1 31B2 72B9  29CE 6696 BF1B 14E6 1D37
Powered by Debian GNU/Linux Lenny - Linux user #188.598




[Autotest PATCH] KVM-test: Add a subtest image_copy

2010-01-04 Thread Yolkfull Chow
Add image_copy subtest for convenient KVM functional testing.

The target image will be copied into the linked directory if link 'images'
is created, and copied to the directory specified in config file otherwise.

Signed-off-by: Yolkfull Chow yz...@redhat.com
---
 client/tests/kvm/kvm_utils.py  |   64 
 client/tests/kvm/tests/image_copy.py   |   42 +
 client/tests/kvm/tests_base.cfg.sample |6 +++
 3 files changed, 112 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/image_copy.py

diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 2bbbe22..1e11441 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -924,3 +924,67 @@ def create_report(report_dir, results_dir):
     reporter = os.path.join(report_dir, 'html_report.py')
     html_file = os.path.join(results_dir, 'results.html')
     os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
+
+
+def is_dir_mounted(source, dest, type, perm):
+    """
+    Check whether `source' is mounted on `dest' with right permission.
+
+    @source: mount source
+    @dest:   mount point
+    @type:   file system type
+    """
+    match_string = "%s %s %s %s" % (source, dest, type, perm)
+    try:
+        f = open("/etc/mtab", "r")
+    except IOError:
+        pass
+    mounted = f.read()
+    f.close()
+    if match_string in mounted:
+        return True
+    return False
+
+
+def umount(mount_point):
+    """
+    Umount `mount_point'.
+
+    @mount_point: mount point
+    """
+    cmd = "umount %s" % mount_point
+    s, o = commands.getstatusoutput(cmd)
+    if s != 0:
+        logging.error("Fail to umount: %s" % o)
+        return False
+    return True
+
+
+def mount(src, mount_point, type, perm="rw"):
+    """
+    Mount the src into mount_point of the host.
+
+    @src: mount source
+    @mount_point: mount point
+    @type: file system type
+    @perm: mount permission
+    """
+    if is_dir_mounted(src, mount_point, type, perm):
+        return True
+
+    umount(mount_point)
+
+    cmd = "mount -t %s %s %s -o %s" % (type, src, mount_point, perm)
+    logging.debug("Issue mount command: %s" % cmd)
+    s, o = commands.getstatusoutput(cmd)
+    if s != 0:
+        logging.error("Fail to mount: %s " % o)
+        return False
+
+    if is_dir_mounted(src, mount_point, type, perm):
+        logging.info("Successfully mounted %s" % src)
+        return True
+    else:
+        logging.error("Mount verification failed; currently mounted: %s" %
+                      file('/etc/mtab').read())
+        return False
diff --git a/client/tests/kvm/tests/image_copy.py b/client/tests/kvm/tests/image_copy.py
new file mode 100644
index 000..800fb90
--- /dev/null
+++ b/client/tests/kvm/tests/image_copy.py
@@ -0,0 +1,42 @@
+import os, logging, commands
+from autotest_lib.client.common_lib import error
+import kvm_utils
+
+def run_image_copy(test, params, env):
+    """
+    Copy guest images from NFS server.
+    1) Mount the NFS directory
+    2) Check the existence of source image
+    3) If existence copy the image from NFS
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    mount_dest_dir = params.get("dst_dir", '/mnt/images')
+    if not os.path.exists(mount_dest_dir):
+        os.mkdir(mount_dest_dir)
+
+    src_dir = params.get('nfs_images_dir')
+    image_dir = os.path.join(os.environ['AUTODIR'], 'tests/kvm/images')
+    if not os.path.exists(image_dir):
+        image_dir = os.path.dirname(params.get("image_name"))
+
+    image = os.path.split(params['image_name'])[1] + '.' + params['image_format']
+
+    src_path = os.path.join(mount_dest_dir, image)
+    dst_path = os.path.join(image_dir, image)
+
+    if not kvm_utils.mount(src_dir, mount_dest_dir, "nfs", "ro"):
+        raise error.TestError("Fail to mount the %s to %s" %
+                              (src_dir, mount_dest_dir))
+
+    # Check the existence of source image
+    if not os.path.exists(src_path):
+        raise error.TestError("Could not found %s in src directory" % src_path)
+
+    logging.info("Copying image '%s'..." % image)
+    cmd = "cp %s %s" % (src_path, dst_path)
+    s, o = commands.getstatusoutput(cmd)
+    if s != 0:
+        raise error.TestFail("Failed to copy image %s: %s" % (cmd, o))
diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index b8f25f4..bdeac19 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -61,6 +61,12 @@ variants:
         floppy = images/floppy.img
         extra_params += " -boot d"
 
+    - image_copy:
+        type = image_copy
+        vms = ''
+        # Here specify the NFS directory that contains all images
+        nfs_images_dir =
+
     - setup:        install unattended_install
         type = steps


Re: Memory ballooning deactivated by the guest

2010-01-04 Thread Daniel Bareiro
Hi, Avi.

On Monday, 04 January 2010 11:30:23 +0200,
Avi Kivity wrote:

 I'm using Linux 2.6.32.2 with qemu-kvm-0.12.1.1 and, testing memory
 ballooning, I obtained the following message:

 r...@ubuntu:~# telnet localhost 5001
 Trying 127.0.0.1...
 Connected to localhost.
 Escape character is '^]'.
 QEMU 0.12.1 monitor - type 'help' for more information
 (qemu)
 (qemu)
 (qemu)
 (qemu) info balloon
 The balloon device has not been activated by the guest

 You need to add '-balloon virtio' to the qemu command line (the error
 message is incorrect).

What do you mean by the error message being incorrect?

Before, I didn't need to use '-balloon virtio', although that was when I
used kvm-88. Is this a necessary requirement when using recent versions
of qemu-kvm?

If it is now necessary, it could be useful to indicate it in the online
documentation [1].

Thanks for your speedy reply.

Regards,
Daniel

[1] http://www.linux-kvm.org/page/Virtio
-- 
Fingerprint: BFB3 08D6 B4D1 31B2 72B9  29CE 6696 BF1B 14E6 1D37
Powered by Debian GNU/Linux Lenny - Linux user #188.598




Re: Memory ballooning deactivated by the guest

2010-01-04 Thread Avi Kivity

On 01/04/2010 12:08 PM, Daniel Bareiro wrote:



(qemu) info balloon
The balloon device has not been activated by the guest
   

You need to add '-balloon virtio' to the qemu command line (the error
message is incorrect).
 

What do you mean whereupon the error message is incorrect?
   


It should say the balloon device is not activated by the host.


Before I didn't need to use '-balloon virtio', although this was when I
used kvm-88. This is a necessary requirement when using recent versions
of qemu-kvm?
   


Yes.


If now it is necessary, it could be useful to indicate it in the online
documentation [1].


[1] http://www.linux-kvm.org/page/Virtio
   


That page is more about I/O, ballooning deserves a page of its own.

--
error compiling committee.c: too many arguments to function



Re: Testing nested virtualization

2010-01-04 Thread Joerg Roedel
  Also I was trying to use qemu-kvm-0.12.1.1 with to Linux 2.6.32 in
  guest within 'test'. And here it happens something similar.
  Sometimes I get to select the option of the menu of the installer,
  but after boot, the installation is hung again.

The problem probably has to do with the TSC bugs I fixed lately for
nested SVM. You can try to disable KVM-Clock for the L1 and the L2 guest,
or use the latest 2.6.32.x kernel on the host. Does one of these fix the
issues for you?

Joerg




RE: pci-stub error and MSI-X for KVM guest

2010-01-04 Thread Fischer, Anna
 Subject: Re: pci-stub error and MSI-X for KVM guest
 
 * Fischer, Anna (anna.fisc...@hp.com) wrote:
   Subject: Re: pci-stub error and MSI-X for KVM guest
This works fine in principle and I can see the PCI device in the
guest under lspci. However, the 82576 VF driver requires the OS
to support MSI-X. My Fedora installation is configured with MSI-X,
e.g. CONFIG_PCI_MSI is 'y'. When I load the driver it tells me it
   cannot
initialize MSI-X for the device, and under /proc/interrupts I can
 see
that MSI-X does not seem to work. Is this a KVM/QEMU limitation?
 It
   works
for me when running the VF driver under a non-virtualized Linux
 system.
  
   No, this should work fine.  QEMU/KVM supports MSI-X to guest as well
 as
   VFs.
 
  Actually, I just got this to work. However, it only works if I call
  qemu-kvm from the command line, while it doesn't work when I start
  the guest via the virt-manager. So this seems to be an issue with
  Fedora's virt-manager rather than with KVM/QEMU. If I call qemu-kvm
  from the command line then I get the pci-stub messages saying 'irq xx
  for MSI/MSI-x' when the guest boots up and the VF device works just
 fine
  inside the guest. When I start the guest using virt-manager then I
 don't
  see any of these irq allocation messages from pci-stub. Any idea what
  the problem could be here?
 
 No, sounds odd.  Can you:
 
   # virsh dumpxml [domain]
 
 and show the output of the hostdev XML section?

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x10' function='0x3'/>
  </source>
</hostdev>

The device to assign is at 0000:03:10.3, dmesg shows:

pci-stub 0000:03:10.3: enabling device (0000 -> 0002)
assign device: host bdf = 3:10:3


 
Also, when I do an lspci on the KVM guest, that is fine, but when
 I
do an lspci -v then the guest crashes down. In the host OS under
 dmesg
I can see this:
   
pci-stub 0000:03:10.0: restoring config space at offset 0x1 (was 0x10, writing 0x14)
   
Is this a known issue? My qemu-kvm version is 2:0.11.0.
  
   No, I've not seen the crash before.  What do you mean the guest
 crashes
   down?
 
  So this also only happens when starting the guest using virt-manager.
 It
  works fine when starting qemu-kvm from the command line. This is weird
 as
  I call it with the same parameters as I can see virt-manager uses
 under
  'ps -ef | grep qemu'. The guest crashes down means that the QEMU
 process
  is terminated. I don't see anything in the logs. It just disappears.
 
 Ouch.  Can you do debuginfo-install qemu-system-x86 to get the debug
 packages, then attach gdb to the QEMU process so that when you do lspci
 -v
 in the guest (assuming this is QEMU segfaulting) you'll get a backtrace?

I don't know how I can tell virt-manager through the GUI to enable debug mode,
e.g. to have it launch qemu-kvm with '-s'. From the command line I can attach gdb
like this, but when running virt-manager from the GUI I cannot connect to
localhost:1234. However, the issues only arise when starting the guest from the
virt-manager GUI. I can't find the configuration option to tell it that I want
qemu-kvm to be launched with '-s'.

 
   This looks like a Fedora specific version (rpm version).  Can you
 verify
   this is from Fedora packages vs. upstream source?  If it's Fedora,
   would be useful to open a bug there.
 
  Yes, I am using KVM/QEMU which ships with the Fedora Core 12
 distribution.
 
 OK, please file a bug there (and include the backtrace info).

I will file a bug once I get the full information. Currently my guess is 
actually that I might have package mismatches or so with libvirt or 
virt-manager or QEMU related software. The is my only explanation for why it 
works from the command line, but not from the GUI. Some path variables must be 
set differently and perhaps pointing to different libraries or packages or so, 
otherwise there is no way it can behave differently when calling virt-manager 
with exactly the same parameters...

Cheers,
Anna


RE: pci-stub error and MSI-X for KVM guest

2010-01-04 Thread Fischer, Anna
 Subject: RE: pci-stub error and MSI-X for KVM guest
 
  Subject: Re: pci-stub error and MSI-X for KVM guest
 
  * Fischer, Anna (anna.fisc...@hp.com) wrote:
Subject: Re: pci-stub error and MSI-X for KVM guest
 This works fine in principle and I can see the PCI device in the
 guest under lspci. However, the 82576 VF driver requires the OS
 to support MSI-X. My Fedora installation is configured with MSI-
 X,
 e.g. CONFIG_PCI_MSI is 'y'. When I load the driver it tells me
 it
cannot
 initialize MSI-X for the device, and under /proc/interrupts I
 can
  see
 that MSI-X does not seem to work. Is this a KVM/QEMU limitation?
  It
works
 for me when running the VF driver under a non-virtualized Linux
  system.
   
No, this should work fine.  QEMU/KVM supports MSI-X to guest as
 well
  as
VFs.
  
   Actually, I just got this to work. However, it only works if I call
   qemu-kvm from the command line, while it doesn't work when I start
   the guest via the virt-manager. So this seems to be an issue with
   Fedora's virt-manager rather than with KVM/QEMU. If I call qemu-kvm
   from the command line then I get the pci-stub messages saying 'irq
 xx
   for MSI/MSI-x' when the guest boots up and the VF device works just
  fine
   inside the guest. When I start the guest using virt-manager then I
  don't
   see any of these irq allocation messages from pci-stub. Any idea
 what
   the problem could be here?
 
  No, sounds odd.  Can you:
 
# virsh dumpxml [domain]
 
  and show the output of the hostdev XML section?
 
 <hostdev mode='subsystem' type='pci' managed='yes'>
   <source>
     <address domain='0x0000' bus='0x03' slot='0x10' function='0x3'/>
   </source>
 </hostdev>
 
 The device to assign is at 0000:03:10.3, dmesg shows:
 
 pci-stub 0000:03:10.3: enabling device (0000 -> 0002)
 assign device: host bdf = 3:10:3

I forgot, here is the process that the virt-manager GUI creates, i.e. this is 
the one that does not work.

qemu  3072 1  4 11:26 ?00:00:33 /usr/bin/qemu-kvm -S -M pc-0.11 
-m 1024 -smp 1 -name FC10-2 -uuid b811b278-fae2-a3cc-d51d-8f5b078b2477 -monitor 
unix:/var/lib/libvirt/qemu/FC10-2.monitor,server,nowait -boot c -drive 
file=/var/lib/libvirt/images/FC10-2.img,if=virtio,index=0,boot=on -drive 
file=/home/af/Download/Fedora-12-x86_64-Live-KDE.iso,if=ide,media=cdrom,index=2 
-net none -serial pty -parallel none -usb -vnc 127.0.0.1:0 -k en-gb -vga cirrus 
-soundhw es1370 -pcidevice host=03:10.3

Note that this one does work from the command line, but not via the GUI.

For the debugging to work, I need the '-s' option to be added too...

Cheers,
Anna


Re: Testing nested virtualization

2010-01-04 Thread Daniel Bareiro
Hi, Joerg.

On Monday, 04 January 2010 12:11:46 +0100,
Joerg Roedel wrote:

   Also I was trying to use qemu-kvm-0.12.1.1 with to Linux 2.6.32
   in guest within 'test'. And here it happens something similar.
   Sometimes I get to select the option of the menu of the
   installer, but after boot, the installation is hung again.
 
 The problem has probably to do with the TSC bugs I fixed lately for
 nested SVM. You can try to disable KVM-Clock for the L1 and the L2
 guest or use the latest 2.6.32.x kernel on the Host. Does one of this
 fix the issues for you?

I'm using Linux 2.6.32.2 with qemu-kvm-0.12.1.1 on the host, so I will
try to disable KVM-Clock for the L1 and L2 guests. How can I do that?

Alexander said in another mail that AMD family 10h has nested paging
(read: quad-core and above), but my processor is dual-core. Can the
problem be related to that?

Thanks for your reply.

Regards,
Daniel
-- 
Daniel Bareiro - GNU/Linux registered user #188.598
Proudly running Debian GNU/Linux with uptime:
10:52:11 up 19:37, 11 users,  load average: 0.12, 0.17, 0.11




Re: FreeBSD guest hogs cpu after disk delay?

2010-01-04 Thread Gleb Natapov
On Mon, Jan 04, 2010 at 08:06:15AM -0700, Thomas Fjellstrom wrote:
 On Sun January 3 2010, Thomas Fjellstrom wrote:
  On Sun January 3 2010, Thomas Fjellstrom wrote:
   I have a strange issue, one of my free bsd guests started using up 100%
cpu and wouldnt respond in the console after a md-raid check started
   on the raid1 volume the vm has its lvm volumes on. About the only thing
   I could do was force the vm off, and restart it. In the guests console
   there was some kind of DMA warning/error related to the guest's disk
   saying it would retry, but it seems it never got that far.
  
  I forgot to mention, the host is running debian sid with kernel 2.6.31-1-
  amd64, and kvm --version reports:
  
  QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
  
  the hosts / and the vm volumes all sit on a lvm volume group, ontop of a
   md- raid1 mirror of two Seagate 7200.12 500GB SATAII drives.
  
  The host is running 7 other guest which all seem to be running smoothly,
  except both freebsd (pfSense) based guests which seemed to have locked up
  after this message:
  
  ad0: TIMEOUT - WRITE_DMA retrying (1 retry left) LBA=16183439
  
  Though the second freebsd guest didn't seem to use up nearly as much cpu,
  but it uses up far lower resources than the first.
  
 
 No one have an idea whats going wrong? All of my virtio based linux guests 
 stayed alive. But both of my FreeBSD guests using whatever ide in the -
 drive option sets locked up solid.
 
Can you try more recent version of kvm?

--
Gleb.


Re: Memory usage with qemu-kvm-0.12.1.1

2010-01-04 Thread Thomas Fjellstrom
On Sun January 3 2010, Thomas Fjellstrom wrote:
 On Sun December 27 2009, Avi Kivity wrote:
  On 12/27/2009 07:00 PM, Daniel Bareiro wrote:
   Also, qemu might be leaking memory.  Please post 'pmap $pid' for all
   of your guests (do that before any of the other tests, on your
   swapped-out system).
  
   -
  
 total   626376K
  
 total   626472K
  
 total   626396K
  
 total   635292K
  
 total   625388K
 
  These all seem sane.  So it's a swap regression, hopefully
  2.6.32.something will have a fix.
 
 Sorry to butt in, but heres something I've found odd:
 
 # ps aux | grep /usr/bin/kvm | grep -v grep | cut -f6 -d' ' | xargs -n 1
  -i{} pmap {} | grep total total   845928K
  total   450336K
  total   441968K
  total   440740K
  total   845848K
  total   465808K
 
 root 10466  2.6  6.2 845924 253804 ?   Sl2009 2084:29
  /usr/bin/kvm -S -M pc -m 512 -smp 1 -name awiki -uuid
  330abdce-f657-e0e2-196b-5bf22c0e76f0 -monitor
  unix:/var/lib/libvirt/qemu/awiki.monitor,server,nowait -boot c -drive
  file=/dev/vg0/awiki-root,if=virtio,index=0,boot=on -drive
  file=/dev/vg0/awiki-swap,if=virtio,index=1 -drive
  file=/mnt/boris/data/pub/diskimage/debian-503-amd64-netinst.iso,if=ide,m
 edia=cdrom,index=2,format= -net
  nic,macaddr=52:54:00:35:8b:fb,vlan=0,model=virtio,name=virtio.0 -net
  tap,fd=19,vlan=0,name=tap.0 -serial pty -parallel none -usb -vnc
  127.0.0.1:2 -k en-us -vga vmware root 13953  0.2  1.3 450332 54832 ?
 Sl2009 167:25 /usr/bin/kvm -S -M pc -m 128 -smp 1 -name nginx
  -uuid 793160c1-5800-72cf-7b66-8484f931d396 -monitor
  unix:/var/lib/libvirt/qemu/nginx.monitor,server,nowait -boot c -drive
  file=/dev/vg0/nginx,if=virtio,index=0,boot=on -net
  nic,macaddr=52:54:00:06:49:d5,vlan=0,model=virtio,name=virtio.0 -net
  tap,fd=21,vlan=0,name=tap.0 -serial pty -parallel none -usb -vnc
  127.0.0.1:3 -k en-us -vga vmware root 14051 31.4  6.7 441964 273132
  ?   Rl   01:19  30:35 /usr/bin/kvm -S -M pc -m 256 -smp 1 -name
  pfsense -uuid 0af4dfac-70f1-c348-9ce5-0df18e9bdc2c -monitor
  unix:/var/lib/libvirt/qemu/pfsense.monitor,server,nowait -boot c -drive
  file=/dev/vg0/pfsense,if=ide,index=0,boot=on -net
  nic,macaddr=00:19:5b:86:3e:fb,vlan=0,model=e1000,name=e1000.0 -net
  tap,fd=22,vlan=0,name=tap.0 -net
  nic,macaddr=52:54:00:53:62:b9,vlan=1,model=e1000,name=e1000.1 -net
  tap,fd=28,vlan=1,name=tap.1 -serial pty -parallel none -usb -vnc
  0.0.0.0:0 -k en-us -vga vmware root 15528 19.7  6.6 440736 270484 ? 
   Sl   01:37  15:38 /usr/bin/kvm -S -M pc -m 256 -smp 1 -name
  pfsense2 -uuid 2c4000a0-7565-b12d-1e2a-1e77cdb778d3 -monitor
  unix:/var/lib/libvirt/qemu/pfsense2.monitor,server,nowait -boot c -drive
  file=/dev/vg0/pfsense2,if=ide,index=0,boot=on -drive
  file=/mnt/boris/data/pub/diskimage/pfSense-1.2.2-LiveCD-Installer.iso,if
 =ide,media=cdrom,index=2,format= -net
  nic,macaddr=52:54:00:38:fc:a7,vlan=0,model=e1000,name=e1000.0 -net
  tap,fd=28,vlan=0,name=tap.0 -net
  nic,macaddr=00:24:1d:18:f8:f6,vlan=1,model=e1000,name=e1000.1 -net
  tap,fd=29,vlan=1,name=tap.1 -serial pty -parallel none -usb -vnc
  127.0.0.1:1 -k en-us -vga vmware root 27079  0.9  0.7 845700 30768 ?
 SLl   2009 584:28 /usr/bin/kvm -S -M pc -m 512 -smp 1 -name
  asterisk -uuid a87d8fc1-ea90-0db4-d6fe-c04e8f2175e7 -monitor
  unix:/var/lib/libvirt/qemu/asterisk.monitor,server,nowait -boot c -drive
  file=/dev/vg0/asterisk,if=virtio,index=0,boot=on -net
  nic,macaddr=52:54:00:68:db:fc,vlan=0,model=virtio,name=virtio.0 -net
  tap,fd=23,vlan=0,name=tap.0 -serial pty -parallel none -usb -vnc
  127.0.0.1:5 -k en-us -vga vmware -soundhw es1370 root 31214  0.6 
  2.9 465804 121476 ?   Sl2009 207:08 /usr/bin/kvm -S -M pc -m 256
  -smp 1 -name svn -uuid 6e30e0be-1781-7a68-fa5d-d3c69787e705 -monitor
  unix:/var/lib/libvirt/qemu/svn.monitor,server,nowait -boot c -drive
  file=/dev/vg0/svn-root,if=virtio,index=0,boot=on -net
  nic,macaddr=52:54:00:7d:f4:0b,vlan=0,model=virtio,name=virtio.0 -net
  tap,fd=27,vlan=0,name=tap.0 -serial pty -parallel none -usb -vnc
  0.0.0.0:4 -k en-us -vga vmware
 
 several of these vms are actually assigned less memory than is stated in
  -m, since I used the virt-manager interface to shrink memory size. awiki
  is set to 256MB, yet is still somehow using over 800MB of virt? one of
  the anon maps in pmap shows up as nearly 512MB (544788K). The rest of
  the vms show oddities like that as well.
 
 host is debian sid with the 2.6.31-2-amd64 kernel, kvm --version reports:
 
 QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
 
 and just for kicks:
 
 r...@boris:~# free -m
  total   used   free sharedbuffers cached
 Mem:  3964   3891 72  0108   1686
 -/+ buffers/cache:   2096   1867
 Swap:   

Re: pci-stub error and MSI-X for KVM guest

2010-01-04 Thread Chris Wright
* Fischer, Anna (anna.fisc...@hp.com) wrote:
  Ouch.  Can you do debuginfo-install qemu-system-x86 to get the debug
  packages, then attach gdb to the QEMU process so that when you do lspci
  -v
  in the guest (assuming this is QEMU segfaulting) you'll get a backtrace?
 
 I don't know how I can tell virt-manager through the GUI to enable debug 
 mode, e.g. call virt-manager with '-s'. From the command line I can attach 
 gdb like this, but when running virt-manager from the GUI then I cannot 
 connect to localhost:1234. However, the issues only arise when starting 
 virt-manager from the GUI. I can't find the configuration option to somehow 
 tell that I want it to be launched with '-s'?

Just looking for a backtrace of the qemu-kvm process itself.  So after
you launch it via virt-manager, gdb /usr/bin/qemu-kvm $(pidof qemu-kvm)
should be sufficient.

thanks,
-chris


Re: [ANNOUNCE] qemu-kvm-0.12.1.1

2010-01-04 Thread Gerd Hoffmann

On 12/23/09 12:37, Avi Kivity wrote:

On 12/23/2009 12:58 PM, Thomas Treutner wrote:

On Wednesday 23 December 2009 11:24:04 Avi Kivity wrote:

Please post a full log, after 'make clean'.

http://pastebin.com/f404c8648



Oh, I missed it at first - looks like libxenguest and libxenctrl conflict.


Indeed, there are (un)lock_pages functions in both libraries.  It is 
fixed in xen 3.3+, where libxenguest doesn't have these functions any more.


/me also wonders why Debian seems to have only static xen libraries.
I think linking against the shared libraries would avoid this too, as the 
functions are supposed to be library-internal.



Copying Gerd for an opinion.


I think there isn't much we can do about this, it is clearly a xen bug.

Uhm, well, while thinking about it: the test app compiled and linked by 
configure should have failed in a similar way, thereby automatically 
disabling xen support.  I have no idea why it didn't ...


cheers,
  Gerd


Re: FreeBSD guest hogs cpu after disk delay?

2010-01-04 Thread Thomas Fjellstrom
On Mon January 4 2010, Gleb Natapov wrote:
 On Mon, Jan 04, 2010 at 08:06:15AM -0700, Thomas Fjellstrom wrote:
  On Sun January 3 2010, Thomas Fjellstrom wrote:
   On Sun January 3 2010, Thomas Fjellstrom wrote:
I have a strange issue, one of my free bsd guests started using up
100% cpu and wouldnt respond in the console after a md-raid check
started on the raid1 volume the vm has its lvm volumes on. About
the only thing I could do was force the vm off, and restart it. In
the guests console there was some kind of DMA warning/error related
to the guest's disk saying it would retry, but it seems it never
got that far.
  
   I forgot to mention, the host is running debian sid with kernel
   2.6.31-1- amd64, and kvm --version reports:
  
   QEMU PC emulator version 0.10.50 (qemu-kvm-devel-88)
  
   the hosts / and the vm volumes all sit on a lvm volume group, ontop
   of a md- raid1 mirror of two Seagate 7200.12 500GB SATAII drives.
  
   The host is running 7 other guest which all seem to be running
   smoothly, except both freebsd (pfSense) based guests which seemed to
   have locked up after this message:
  
   ad0: TIMEOUT - WRITE_DMA retrying (1 retry left) LBA=16183439
  
   Though the second freebsd guest didn't seem to use up nearly as much
   cpu, but it uses up far lower resources than the first.
 
  No one have an idea whats going wrong? All of my virtio based linux
  guests stayed alive. But both of my FreeBSD guests using whatever ide
  in the - drive option sets locked up solid.
 
 Can you try more recent version of kvm?

This is a production machine that I'd really like not to have to reboot, 
or stop the vms in any way. But qemu-kvm-0.11.0 seems to exist in apt now, 
so I might upgrade soonish (the kvm package seems to have been removed from 
sid, so it hasn't been upgrading).

Also grub2 seems to be having issues on it, so I'm afraid to reboot at all. 
All of a sudden last month it started to refuse to update itself properly. 
Who knows if the box even boots at this point :)

 --
   Gleb.
 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
 


-- 
Thomas Fjellstrom
tfjellst...@shaw.ca


Re: Testing nested virtualization

2010-01-04 Thread Daniel Bareiro
Hi, Alex.

On Monday, 04 January 2010 16:07:52 +0100,
Alexander Graf wrote:

  Also I was trying to use qemu-kvm-0.12.1.1 with to Linux
  2.6.32 in guest within 'test'. And here it happens something
  similar.  Sometimes I get to select the option of the menu of
  the installer, but after boot, the installation is hung again.

  The problem has probably to do with the TSC bugs I fixed lately for
  nested SVM. You can try to disable KVM-Clock for the L1 and the L2
  guest or use the latest 2.6.32.x kernel on the Host. Does one of
  this fix the issues for you?

  I'm using Linux 2.6.32.2 with qemu-kvm-0.12.1.1 in the host, so I
  will try to disable KVM-Clock for L1 and L2 guest. How I can get it?
  
  Alexander said in another mail that AMD family 10 have nested paging
  (read quad-core and above) but my processor is dual-core. The
  problem can be related to that?

 It just means you're using code paths that I usually don't use, since
 my main development machine can do nested paging. It should work
 nevertheless, just be horribly slow.

So what seems to be a hang could in fact be a horribly slow operation?

What draws my attention is that, as I said in another mail, this happens
at different moments. Can this be related to the TSC bug mentioned by
Joerg? Is the 'Clocksource tsc unstable' message in dmesg due to this bug?

On the 'test' host I have the following:

test:~# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
kvm-clock

test:~# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
kvm-clock hpet acpi_pm


Thanks for your reply.

Regards,
Daniel
-- 
Daniel Bareiro - GNU/Linux registered user #188.598
Proudly running Debian GNU/Linux with uptime:
12:21:25 up 21:06, 11 users,  load average: 0.08, 0.11, 0.09




Re: Testing nested virtualization

2010-01-04 Thread Joerg Roedel
Hi Daniel,

On Mon, Jan 04, 2010 at 12:52:47PM -0300, Daniel Bareiro wrote:
 test:~# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
 kvm-clock
 
 test:~# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
 kvm-clock hpet acpi_pm

Can you try to boot the L1 and L2 guests with the 'no-kvmclock' kernel
parameter? This disables kvm-clock.

Joerg




Re: [PATCH] configure: Correct KVM options in help output

2010-01-04 Thread Pierre Riteau
On 1 Dec 2009, at 14:53, Pierre Riteau wrote:

 Signed-off-by: Pierre Riteau pierre.rit...@irisa.fr
 ---
 configure |8 
 1 files changed, 4 insertions(+), 4 deletions(-)
 
 diff --git a/configure b/configure
 index 376c458..85f7b5e 100755
 --- a/configure
 +++ b/configure
 @@ -723,10 +723,10 @@ echo   --disable-bluez  disable bluez stack 
 connectivity
 echo   --enable-bluez   enable bluez stack connectivity
 echo   --disable-kvmdisable KVM acceleration support
 echo   --enable-kvm enable KVM acceleration support
 -echo   --disable-cap-kvm-pitdisable KVM pit support
 -echo   --enable-cap-kvm-pit enable KVM pit support
 -echo   --disable-cap-device-assignmentdisable KVM device assignment 
 support
 -echo   --enable-cap-device-assignment enable KVM device assignment 
 support
 +echo   --disable-kvm-cap-pitdisable KVM pit support
 +echo   --enable-kvm-cap-pit enable KVM pit support
 +echo   --disable-kvm-cap-device-assignmentdisable KVM device assignment 
 support
 +echo   --enable-kvm-cap-device-assignment enable KVM device assignment 
 support
 echo   --disable-nptl   disable usermode NPTL support
 echo   --enable-nptlenable usermode NPTL support
 echo   --enable-system  enable all system emulation targets
 -- 
 1.6.5
 
 --
 To unsubscribe from this list: send the line unsubscribe kvm in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html


I think this patch is still valid...

-- 
Pierre Riteau -- http://perso.univ-rennes1.fr/pierre.riteau/



Re: Memory usage with qemu-kvm-0.12.1.1

2010-01-04 Thread David S. Ahern


On 01/04/2010 08:12 AM, Thomas Fjellstrom wrote:
 Would this be normal for my setup? The virt usage seems abnormally high for 
 all of my guests, especially the ones using over 800MB virt.
 

As I understand it, virtual memory usage shows the allocated address
ranges (library mappings, dynamic allocations, etc). The guest memory
will be one of the anonymous mappings, with size equal to the memory
allocated to the VM. Until the guest accesses all of its memory (or
unless qemu initializes it after the malloc), even that memory is only a
notional allocation. This is standard memory behaviour for Linux -- malloc
only creates address mappings/allocations; memory is not backed with
physical RAM until accessed.

As an example, I have a Linux guest with 512MB of RAM. The VmSize at
startup is 892700kB, though RSS is only 59792kB. If I log in to the guest
and make use of memory within it, then the guest memory becomes backed
from the host side. E.g., I have a memuser program that does nothing
more than malloc memory and initialize it. For the 512M guest, I run
this program with an input arg of 512M and voila, the RSS for the qemu
process jumps to 527444kB while the VmSize has not changed.
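
For anyone who wants to reproduce this, here is a minimal sketch of such a memuser program (my reconstruction, not David's actual tool); run it inside the guest with the number of megabytes to touch and watch the qemu process RSS on the host:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Allocate the requested number of megabytes and touch every byte, so the
 * mapping is actually backed by RAM; this is what makes the host-side RSS
 * of the qemu process grow. */
int main(int argc, char **argv)
{
	size_t mb = argc > 1 ? strtoul(argv[1], NULL, 0) : 512;
	size_t len = mb << 20;
	char *p = malloc(len);

	if (!p) {
		perror("malloc");
		return 1;
	}
	memset(p, 0x55, len);	/* initialize it, forcing real allocation */

	printf("touched %zu MB, press Enter to exit\n", mb);
	getchar();		/* keep the mapping alive while you look at RSS */
	return 0;
}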

David


Re: [PATCH 2/2] virtio_net: Defer skb allocation in receive path

2010-01-04 Thread Shirley Ma
Hello Amit,

Sorry for the late response; I am just back from vacation.

On Thu, 2009-12-24 at 19:07 +0530, Amit Shah wrote:
  +static void free_unused_bufs(struct virtnet_info *vi)
  +{
  +	void *buf;
  +	while (vi->num) {
  +		buf = vi->rvq->vq_ops->detach_unused_buf(vi->rvq);
  +		if (!buf)
  +			continue;
 
 Do you mean 'break' here?

Nope, it means break since the buffer usage is not sorted by descriptors
from my understanding. It breaks when vi->num reaches 0.

Thanks
Shirley



Re: [PATCH 2/2] virtio_net: Defer skb allocation in receive path

2010-01-04 Thread Michael S. Tsirkin
On Mon, Jan 04, 2010 at 01:25:44PM -0800, Shirley Ma wrote:
 Hello Amit,
 
 Sorry for late response. I am just back from vacation.
 
 On Thu, 2009-12-24 at 19:07 +0530, Amit Shah wrote:
   +static void free_unused_bufs(struct virtnet_info *vi)
   +{
   +	void *buf;
   +	while (vi->num) {
   +		buf = vi->rvq->vq_ops->detach_unused_buf(vi->rvq);
   +		if (!buf)
   +			continue;
  
  Do you mean 'break' here?
 
 Nope, it means break since the buffer usage is not sorted by descriptors
 from my understanding. It breaks when vi->num reaches 0.
 
 Thanks
 Shirley
t

I don't understand.
detach_unused_buf has:
+	if (!vq->data[i])
+		continue;

so it will never return NULL unless there are no more buffers?  Breaking here
and BUG_ON(vi->num), as Amit suggests, seems cleaner than looping forever if
there's a bug.
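
To make the suggestion concrete, here is a rough sketch of the loop shape being proposed (not the actual patch; the field and callback names simply follow the quoted code): stop when detach_unused_buf() runs dry, then sanity-check the counter instead of spinning forever on an accounting bug.

static void free_unused_bufs(struct virtnet_info *vi)
{
	void *buf;

	while ((buf = vi->rvq->vq_ops->detach_unused_buf(vi->rvq)) != NULL) {
		/* release 'buf' here; depending on how it was posted it is
		 * an skb or a page, so the real code has to dispatch on that */
		--vi->num;
	}
	BUG_ON(vi->num != 0);
}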

-- 
MST


[PATCH] PPC: Fix typo in rebolting code

2010-01-04 Thread Alexander Graf
When we're loading bolted entries into the SLB again, we're checking if an
entry is in use and only slbmte it when it is.

Unfortunately, the check always goes to the skip label of the first entry,
resulting in an endless loop when it actually gets triggered.

Signed-off-by: Alexander Graf ag...@suse.de
---
 arch/powerpc/kvm/book3s_64_slb.S |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index ecd237a..8e44788 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -31,7 +31,7 @@
 #define REBOLT_SLB_ENTRY(num) \
ld  r10, SHADOW_SLB_ESID(num)(r11); \
cmpdi   r10, 0; \
-   beq slb_exit_skip_1; \
+   beq slb_exit_skip_ ## num; \
	oris	r10, r10, SLB_ESID_V@h; \
ld  r9, SHADOW_SLB_VSID(num)(r11); \
slbmte  r9, r10; \
-- 
1.6.0.2



[PATCH] PPC: Enable lightweight exits again

2010-01-04 Thread Alexander Graf
The PowerPC C ABI defines that registers r14-r31 need to be preserved across
function calls. Since our exit handler is written in C, we can make use of that
and don't need to reload r14-r31 on every entry/exit cycle.

This technique is also used in the BookE code and is called lightweight exits
there. To follow the tradition, it's called the same in Book3S.

So far this optimization was disabled though, as the code didn't do what it was
expected to do, but failed to work.

This patch fixes and enables lightweight exits again.

Signed-off-by: Alexander Graf ag...@suse.de
---
 arch/powerpc/kvm/book3s.c   |4 +-
 arch/powerpc/kvm/book3s_64_interrupts.S |  106 ---
 2 files changed, 57 insertions(+), 53 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 492dcc1..fd2a4d5 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -539,8 +539,6 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
r = kvmppc_emulate_mmio(run, vcpu);
if ( r == RESUME_HOST_NV )
r = RESUME_HOST;
-   if ( r == RESUME_GUEST_NV )
-   r = RESUME_GUEST;
}
 
return r;
@@ -645,7 +643,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu 
*vcpu,
er = kvmppc_emulate_instruction(run, vcpu);
switch (er) {
case EMULATE_DONE:
-   r = RESUME_GUEST;
+   r = RESUME_GUEST_NV;
break;
case EMULATE_FAIL:
printk(KERN_CRIT "%s: emulation at %lx failed (%08x)\n",
diff --git a/arch/powerpc/kvm/book3s_64_interrupts.S 
b/arch/powerpc/kvm/book3s_64_interrupts.S
index 7b55d80..d95d0d9 100644
--- a/arch/powerpc/kvm/book3s_64_interrupts.S
+++ b/arch/powerpc/kvm/book3s_64_interrupts.S
@@ -40,6 +40,26 @@
mtmsrd  r0,1
 .endm
 
+#define VCPU_LOAD_NVGPRS(vcpu) \
+   ld  r14, VCPU_GPR(r14)(vcpu); \
+   ld  r15, VCPU_GPR(r15)(vcpu); \
+   ld  r16, VCPU_GPR(r16)(vcpu); \
+   ld  r17, VCPU_GPR(r17)(vcpu); \
+   ld  r18, VCPU_GPR(r18)(vcpu); \
+   ld  r19, VCPU_GPR(r19)(vcpu); \
+   ld  r20, VCPU_GPR(r20)(vcpu); \
+   ld  r21, VCPU_GPR(r21)(vcpu); \
+   ld  r22, VCPU_GPR(r22)(vcpu); \
+   ld  r23, VCPU_GPR(r23)(vcpu); \
+   ld  r24, VCPU_GPR(r24)(vcpu); \
+   ld  r25, VCPU_GPR(r25)(vcpu); \
+   ld  r26, VCPU_GPR(r26)(vcpu); \
+   ld  r27, VCPU_GPR(r27)(vcpu); \
+   ld  r28, VCPU_GPR(r28)(vcpu); \
+   ld  r29, VCPU_GPR(r29)(vcpu); \
+   ld  r30, VCPU_GPR(r30)(vcpu); \
+   ld  r31, VCPU_GPR(r31)(vcpu); \
+
 /*
  *   *
  * Guest entry / exit code that is in kernel module memory (highmem) *
@@ -67,12 +87,16 @@ kvm_start_entry:
SAVE_NVGPRS(r1)
 
/* Save LR */
-   mflrr14
-   std r14, _LINK(r1)
+   std r0, _LINK(r1)
+
+   /* Load non-volatile guest state from the vcpu */
+   VCPU_LOAD_NVGPRS(r4)
 
-/* XXX optimize non-volatile loading away */
 kvm_start_lightweight:
 
+	ld	r9, VCPU_PC(r4)			/* r9 = vcpu->arch.pc */
+	ld	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
+
DISABLE_INTERRUPTS
 
/* Save R1/R2 in the PACA */
@@ -81,29 +105,6 @@ kvm_start_lightweight:
ld  r3, VCPU_HIGHMEM_HANDLER(r4)
std r3, PACASAVEDMSR(r13)
 
-   /* Load non-volatile guest state from the vcpu */
-   ld  r14, VCPU_GPR(r14)(r4)
-   ld  r15, VCPU_GPR(r15)(r4)
-   ld  r16, VCPU_GPR(r16)(r4)
-   ld  r17, VCPU_GPR(r17)(r4)
-   ld  r18, VCPU_GPR(r18)(r4)
-   ld  r19, VCPU_GPR(r19)(r4)
-   ld  r20, VCPU_GPR(r20)(r4)
-   ld  r21, VCPU_GPR(r21)(r4)
-   ld  r22, VCPU_GPR(r22)(r4)
-   ld  r23, VCPU_GPR(r23)(r4)
-   ld  r24, VCPU_GPR(r24)(r4)
-   ld  r25, VCPU_GPR(r25)(r4)
-   ld  r26, VCPU_GPR(r26)(r4)
-   ld  r27, VCPU_GPR(r27)(r4)
-   ld  r28, VCPU_GPR(r28)(r4)
-   ld  r29, VCPU_GPR(r29)(r4)
-   ld  r30, VCPU_GPR(r30)(r4)
-   ld  r31, VCPU_GPR(r31)(r4)
-
-	ld	r9, VCPU_PC(r4)			/* r9 = vcpu->arch.pc */
-	ld	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
-
ld  r3, VCPU_TRAMPOLINE_ENTER(r4)
mtsrr0  r3
 
@@ -247,7 +248,6 @@ kvmppc_handler_highmem:
 
 no_dcbz32_off:
 
-   /* XXX maybe skip on lightweight? */
std r14, VCPU_GPR(r14)(r12)
std r15, VCPU_GPR(r15)(r12)
std r16, VCPU_GPR(r16)(r12)
@@ -267,9 +267,6 @@ no_dcbz32_off:
std 

Re: Testing nested virtualization

2010-01-04 Thread Daniel Bareiro
On Monday, 04 January 2010 17:04:16 +0100,
Joerg Roedel wrote:

 Hi Daniel,

Hi, Joerg.

  test:~# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
  kvm-clock
  
  test:~# cat 
  /sys/devices/system/clocksource/clocksource0/available_clocksource
  kvm-clock hpet acpi_pm
 
 Can you try to boot L1 and L2 guest with the 'no-kvmclock' kernel
 parameter? This disables the kvm-clock.

Question: what are L1 and L2? It sounds like cache levels, but I suppose
in this context they mean something else.

Beyond that, using the no-kvmclock kernel parameter in 'test', I now have
the following:

test:~# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
hpet

test:~# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
hpet acpi_pm


But the same thing happens: the Debian installer hangs at the initial screen.

Thanks for your reply.

Regards,
Daniel
-- 
Daniel Bareiro - GNU/Linux registered user #188.598
Proudly running Debian GNU/Linux with uptime:
20:43:21 up 1 day,  5:28, 11 users,  load average: 0.21, 0.20, 0.12




Re: [Autotest] [Autotest PATCH] KVM-test: Add a subtest image_copy

2010-01-04 Thread Yolkfull Chow
On Mon, Jan 04, 2010 at 10:52:13PM +0800, Amos Kong wrote:
 On Mon, Jan 04, 2010 at 05:30:21PM +0800, Yolkfull Chow wrote:
  Add image_copy subtest for convenient KVM functional testing.
  
  The target image will be copied into the linked directory if link 'images'
  is created, and copied to the directory specified in config file otherwise.
  
  Signed-off-by: Yolkfull Chow yz...@redhat.com
  ---
   client/tests/kvm/kvm_utils.py  |   64 
  
   client/tests/kvm/tests/image_copy.py   |   42 +
   client/tests/kvm/tests_base.cfg.sample |6 +++
   3 files changed, 112 insertions(+), 0 deletions(-)
   create mode 100644 client/tests/kvm/tests/image_copy.py
  
  diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
  index 2bbbe22..1e11441 100644
  --- a/client/tests/kvm/kvm_utils.py
  +++ b/client/tests/kvm/kvm_utils.py
  @@ -924,3 +924,67 @@ def create_report(report_dir, results_dir):
   reporter = os.path.join(report_dir, 'html_report.py')
   html_file = os.path.join(results_dir, 'results.html')
   os.system('%s -r %s -f %s -R' % (reporter, results_dir, html_file))
  +
  +
  +def is_dir_mounted(source, dest, type, perm):
  +
  +Check whether `source' is mounted on `dest' with right permission.
  +
  +@source: mount source
  +@dest:   mount point
  +@type:   file system type
 
@perm:   mount permission
 
  +
  +match_string = %s %s %s %s % (source, dest, type, perm)
  +try:
  +f = open(/etc/mtab, r)
  +except IOError:
  +pass
 
 When calling open(), if an IOError exception is raised, 'f' is never assigned,
 so we cannot call 'f.read()' or 'f.close()'.

Ah..yes, thanks for pointing this out.

 
 We need 'return False', not 'pass' 
 
  +mounted = f.read()
  +f.close()
  +if match_string in mounted: 
  +return True
  +return False
  +
  +
  +def umount(mount_point):
  +
  +Umount `mount_point'.
  +
  +@mount_point: mount point
  +
  +cmd = umount %s % mount_point
  +s, o = commands.getstatusoutput(cmd)
  +if s != 0:
  +logging.error(Fail to umount: %s % o)
  +return False
  +return True
  +
  +
  +def mount(src, mount_point, type, perm = rw):
  +
  +Mount the src into mount_point of the host.
  +
  +@src: mount source
  +@mount_point: mount point
  +@type: file system type
  +@perm: mount permission
  +
  +if is_dir_mounted(src, mount_point, type, perm):
  +return True
  +
  +umount(mount_point)
  +
  +cmd = mount -t %s %s %s -o %s % (type, src, mount_point, perm)
  +logging.debug(Issue mount command: %s % cmd)
  +s, o = commands.getstatusoutput(cmd)
  +if s != 0:
  +logging.error(Fail to mount: %s  % o)
  +return False
  +
  +if is_dir_mounted(src, mount_point, type, perm):
  +logging.info(Successfully mounted %s % src)
  +return True
  +else:
  +logging.error(Mount verification failed; currently mounted: %s %
  + file('/etc/mtab').read())
  +return False
  diff --git a/client/tests/kvm/tests/image_copy.py 
  b/client/tests/kvm/tests/image_copy.py
  new file mode 100644
  index 000..800fb90
  --- /dev/null
  +++ b/client/tests/kvm/tests/image_copy.py
  @@ -0,0 +1,42 @@
  +import os, logging, commands
  +from autotest_lib.client.common_lib import error
  +import kvm_utils
  +
  +def run_image_copy(test, params, env):
  +
  +Copy guest images from NFS server.
  +1) Mount the NFS directory
  +2) Check the existence of source image
  +3) If existence copy the image from NFS
  +
  +@param test: kvm test object
  +@param params: Dictionary with the test parameters
  +@param env: Dictionary with test environment.
  +
  +mount_dest_dir = params.get(dst_dir,'/mnt/images')
  +if not os.path.exists(mount_dest_dir):
  +os.mkdir(mount_dest_dir)
  +
  +src_dir = params.get('nfs_images_dir')
  +image_dir = os.path.join(os.environ['AUTODIR'],'tests/kvm/images')
  +if not os.path.exists(image_dir):
  +image_dir = os.path.dirname(params.get(image_name))
  +
  +image = 
  os.path.split(params['image_name'])[1]+'.'+params['image_format']
  +
  +src_path = os.path.join(mount_dest_dir, image)
  +dst_path = os.path.join(image_dir, image)
  +
  +if not kvm_utils.mount(src_dir, mount_dest_dir, nfs, ro):
  +raise error.TestError(Fail to mount the %s to %s %
  +  (src_dir, mount_dest_dir))
  +  
  +# Check the existence of source image
  +if not os.path.exists(src_path):
  +raise error.TestError(Could not found %s in src directory % 
  src_path)
  +
  +logging.info(Copying image '%s'... % image)
  +cmd = cp %s %s % (src_path, dst_path)
  +s, o = 

Re: [PATCH] Add definitions for current cpu models..

2010-01-04 Thread john cooper
Marcelo Tosatti wrote:
 On Mon, Dec 21, 2009 at 01:46:36AM -0500, john cooper wrote:
 +    {
 +        .name = "Opteron_G2",
 +        .level = 5,
 +        .vendor1 = CPUID_VENDOR_INTEL_1,
 +        .vendor2 = CPUID_VENDOR_INTEL_2,
 +        .vendor3 = CPUID_VENDOR_INTEL_3,
 
 Silly question: why a CPU named Opteron_G2 uses intel vendor id's?

The feedback I had from AMD indicated that using the Intel
strings for a family 15 CPU resulted in the least
amount of guest confusion.  The upstream kvm64 model
does the same.
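
For context, a small illustrative snippet (not qemu code) of why the vendor shows up as three 4-byte constants: CPUID leaf 0 returns the 12-character vendor string in EBX, EDX, ECX, so the guest reassembles "GenuineIntel" (or "AuthenticAMD") from three registers:

#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned int eax = 0, ebx, ecx, edx;
	char vendor[13];

	/* CPUID leaf 0: vendor string comes back in EBX, EDX, ECX order */
	__asm__ volatile("cpuid"
	                 : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));

	memcpy(vendor + 0, &ebx, 4);
	memcpy(vendor + 4, &edx, 4);
	memcpy(vendor + 8, &ecx, 4);
	vendor[12] = '\0';

	printf("%s\n", vendor);	/* e.g. GenuineIntel or AuthenticAMD */
	return 0;
}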

Sorry for my late response here which was preempted
by the intervening holiday.

-john

-- 
john.coo...@redhat.com