Slackware 7.0 hangs at ifconfig on 0.12.5, worked on 0.11.0.

2010-11-16 Thread Mark van Walraven
Hi,

The subject pretty much says it all, Slackware 7.0, Linux 2.2.13 kernel
and ifconfig 1.39 (net-tools 1.52).  

qemu-kvm 0.12.5 is from Debian Lenny backports, with the backports kernel
(2.6.32).  qemu-kvm 0.11.0 is built from source with the Lenny stable
kernel 2.6.26.  Both hosts have X3350 CPUs.

This works on 0.11.0 under libvirt 0.7.6:

/usr/bin/kvm -S -M pc-0.11 -cpu qemu32 -enable-kvm -m 256 -smp 1 -name 
johnsbox -uuid 8632a9c2-f579-4a8f-b355-26193e337471 -monitor 
unix:/var/lib/libvirt/qemu/johnsbox.monitor,server,nowait -boot a -drive 
file=/dev/mapper/apparel-johnsbox,if=ide,bus=0,unit=0,cache=none -drive 
if=ide,media=cdrom,bus=1,unit=0 -drive 
file=/var/lib/libvirt/images/johnsbox-floppy.img,if=floppy,unit=0 -net 
nic,macaddr=00:0c:29:f2:cf:ff,vlan=0,model=ne2k_pci,name=net0 -net 
tap,fd=30,vlan=0,name=hostnet0 -serial none -parallel none -usb -vnc 
127.0.0.1:0 -vga cirrus -balloon virtio

This hangs on 0.12.5 (and libvirt 0.8.1) in ifconfig while configuring eth0:

/usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 256 -smp 
1,sockets=1,cores=1,threads=1 -name johnsbox -uuid 
815ff1ac-47df-596c-a394-7173d9b4633f -nodefaults -chardev 
socket,id=monitor,path=/var/lib/libvirt/qemu/johnsbox.monitor,server,nowait 
-mon chardev=monitor,mode=readline -rtc base=utc -boot a -drive 
file=/dev/collective/johnsbox,if=none,id=drive-ide0-0-0,format=raw -device 
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive 
if=none,media=cdrom,id=drive-ide0-1-0,readonly=on -device 
ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive 
file=/var/lib/libvirt/images/johnsbox-floppy.img,if=none,id=drive-fdc0-0-0,readonly=on,format=raw
 -global isa-fdc.driveA=drive-fdc0-0-0 -device 
ne2k_pci,vlan=0,id=net0,mac=00:0c:29:f2:cf:ff,bus=pci.0,addr=0x4 -net 
tap,fd=26,vlan=0,name=hostnet0 -chardev pty,id=serial0 -device 
isa-serial,chardev=serial0 -usb -vnc 127.0.0.1:9 -k en-us -vga std -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3

Same again (trying to get the command line similar to what worked):

/usr/bin/kvm -M pc-0.11 -cpu qemu32 -enable-kvm -m 256 -smp 1 -boot a 
-drive file=/dev/collective/johnsbox,if=ide,bus=0,unit=0,cache=none -drive 
if=ide,media=cdrom,bus=1,unit=0 -drive 
file=/var/lib/libvirt/images/johnsbox-floppy.img,if=floppy,unit=0 -net 
nic,macaddr=00:0c:29:f2:cf:ff,vlan=0,model=ne2k_pci,name=net0 -serial none 
-parallel none -usb -vnc 127.0.0.1:9 -vga cirrus -balloon virtio

Hoping I haven't missed anything obvious ...  Without a virtual NIC, the
guest boots up OK and appears to be fine.  The guest's boot messages show
eth0 getting IRQ 11 under both 0.11.0 and 0.12.5; however, under 0.12.5
eth0 doesn't show up in /proc/interrupts.
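
For reference, the guest-side check is nothing fancy - roughly the
following, where check_irq is just an illustrative helper of mine, not a
real tool:

```shell
# Does an interface appear in an interrupts table?  Under 0.12.5 the
# answer for eth0 is no, despite the boot-time IRQ claim.
check_irq() {
  if grep -q "$2" "$1"; then
    echo "$2 has an interrupt line"
  else
    echo "$2 missing"
  fi
}
check_irq /proc/interrupts eth0
```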

Any ideas?  I'm not quite in a position to try 0.13.0, unfortunately.

Thanks,

Mark.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: KVM and kernel 2.6.30 file system madness

2009-07-20 Thread Mark van Walraven
On Wed, Jul 15, 2009 at 02:33:03PM +0530, Amit Shah wrote:
 On (Wed) Jul 15 2009 [09:52:36], Robert Wimmer wrote:
  Hi!
  
   Are you using virtio-block?
  
  Yes.
 
 OK, then there is a known problem. I think the fix is waiting to be
 applied.

Amit, would you kindly state the problem with virtio-block?

Thanks,

Mark.


Re: KVM on Debian

2009-06-04 Thread Mark van Walraven
On Thu, Jun 04, 2009 at 01:37:54PM -0700, Aaron Clausen wrote:
 I'm running a production Debian Lenny server using KVM to run a couple
 of Windows and a couple of Linux guests.  All is working well, but I
 want to give my Server 2003 guest access to a SCSI tape drive.
 Unfortunately, Debian is pretty conservative, and the version of KVM
 is too old to support this.  Is there a reasonably safe way of
 upgrading to one of the newer versions of KVM on this server?

I'm interested in this too.  So far I have found that Lenny's libvirt fails
to parse the output of kvm --help, though this is fixed in the libvirt in
testing.  The kvm package from experimental seems to work well - after a
day of testing.

My next step is to try qemu-kvm, built from source.  The Debianised libvirt
expects the kvm binaries to be in /usr/bin/kvm, so you can symlink them
from /usr/local/bin if you prefer to install there.  I've also experimented
with a shell script wrapper in /usr/bin/kvm that condenses the output of
qemu-kvm --help so that libvirtd for Lenny works.
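
The wrapper is only a sketch from memory - the binary path, the line
count, and whether trimming --help is even enough for Lenny's parser are
all guesses:

```shell
# Hypothetical /usr/bin/kvm wrapper, written as a function for clarity:
# condense --help for the old libvirtd parser, pass everything else
# straight through.  KVM_BIN exists only so the sketch can be exercised
# without the real binary installed.
kvm_wrap() {
  kvm_bin=${KVM_BIN:-/usr/local/bin/qemu-system-x86_64}
  if [ "$1" = "--help" ]; then
    "$kvm_bin" --help | head -n 30   # trim the output old libvirtd chokes on
  else
    exec "$kvm_bin" "$@"
  fi
}
```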

Regards,

Mark.


Re: KVM on Debian

2009-06-04 Thread Mark van Walraven
Hi,

An update in the hope that this is useful to someone :-)

On Fri, Jun 05, 2009 at 09:03:03AM +1200, Mark van Walraven wrote:
 My next step is to try qemu-kvm, built from source.  The Debianised libvirt
 expects the kvm binaries to be in /usr/bin/kvm, so you can symlink them
 from /usr/local/bin if you prefer to install there.  I've also experimented
 with a shell script wrapper in /usr/bin/kvm that condenses the output of
 qemu-kvm --help so that libvirtd for Lenny works.

Actually, the current Debian Lenny libvirt* (0.4.6-10) seem to work
fine with qemu-kvm-0.10.5 built from source.  All I needed to do was
symlink /usr/local/bin/qemu-system-x86_64 to /usr/bin/kvm and copy
extboot.bin into /usr/local/share/qemu/ (I used the one from the
kvm 85+dfsg-3 package in Experimental).

So far, so good.

Mark.


Re: kvm-83 write performance raw

2009-03-04 Thread Mark van Walraven
Hi Paolo,

Sorry, list - getting a bit off-topic, but I'll include this because it
might be of general interest to kvm users ...

On Wed, Mar 04, 2009 at 11:28:18PM +0100, Paolo Pedaletti wrote:
 ok, I can understand this
 but on a big multimedia-file partition an appropriate read-ahead could  
 be useful (set with blockdev)

Sure.  Adjust and measure for your average and worst-case workload.
I expected a moderate read-ahead to help on the storage serving my kvm
hosts, but in practice it caused painful latency spikes.

 I use LVM extensively, so can you explain how to achieve alignment  
 between LVM and the filesystem?  And how to check it?

Your links contain good material on this.  My comments are:

When you can, don't use a partition table but make the whole disk a PV.
Otherwise, watch that your partitions are properly aligned.

Use '--metadatasize 250k' with pvcreate (the size is always rounded up
to the next 64KB boundary, so 250k ends up as 256KB; '--metadatasize
256k' would result in 320KB).
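
Put another way, pvcreate seems to bump the request to the *next* 64KB
boundary even when it is already an exact multiple.  As arithmetic
(round_mda is just my name for it, not an LVM tool):

```shell
# Round a requested --metadatasize (in KB) the way the numbers above
# suggest pvcreate does: always up to the *next* 64KB boundary.
round_mda() {
  echo $(( ($1 / 64 + 1) * 64 ))
}
round_mda 250   # -> 256
round_mda 256   # -> 320
```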

'pvs -o+pe_start' and 'dmsetup table' will show your PV and LV offsets.

If you use image files, you probably don't want them to have holes in
them, or they will likely fragment as the holes are filled.  I expect
qcow2 images fragment internally?  Read-ahead on a fragmented image file
will really hurt.
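
A quick way to tell whether an image file already has holes is to compare
its allocated blocks with its apparent size (GNU stat assumed; is_sparse
is only an illustrative name):

```shell
# A sparse file has fewer 512-byte blocks allocated than its length implies.
is_sparse() {
  apparent=$(stat -c %s "$1")
  allocated=$(( $(stat -c %b "$1") * 512 ))
  if [ "$allocated" -lt "$apparent" ]; then
    echo sparse
  else
    echo "fully allocated"
  fi
}
```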

Ext2 doesn't seem very sensitive to alignment.  I haven't played with
aligning ext3's journal.  (Speculation: a deliberately-wrong stride could
be interesting if inode lookups are a seek away from their data block
and your RAID is clever about splitting seeks between mirror drives.)

RAID controllers can have their own sector offsets and read-aheads.

Using disk type='block' avoids the host filesystem overhead altogether.

Regards,

Mark.


Re: kvm-83 write performance raw

2009-03-02 Thread Mark van Walraven
On Mon, Mar 02, 2009 at 03:11:59PM -0500, Malinka Rellikwodahs wrote:
 when running with a raw disk image as a file or a raw disk image on an
 lvm vg, I'm getting very low performance on write (5-10 MB/s) however
 when using qcow2 format disk image the write speed is much better
 (~30MB/s), which is consistent with a very similar setup running
 kvm-68.  Unfortunately when running the test with qcow2 the system
 becomes unresponsive for a brief time during the test.

 The host is running raid5 and drbd (drive replication software),
 however the host itself is performing well, and avoiding the
 drbd layer in the guest does not improve performance, but running on
 qcow2 does.
 
 Any thoughts/suggestions of what could be wrong or what to do to fix this?

RAID1 has *much* better write performance than RAID5.  With striped RAID
levels, alignment is important.  RAID controllers sometimes introduce
hidden alignment
offsets.  Excessive read-ahead is a waste of time with a lot of small
random I/O, which is what I see mostly with guests on flat disk images.

With LVM, it pays to make sure the LVs are aligned to the disk.  I prefer
boundaries at multiples of at least 64 sectors, which makes the LVM
overhead virtually disappear.  I align the guest filesystems too, when
I can.
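
Checking comes down to modular arithmetic on the offsets that
'pvs -o+pe_start --units s' or 'dmsetup table' report (aligned_64 below
is just an illustrative helper):

```shell
# Is an offset, in 512-byte sectors, on a 64-sector (32KB) boundary?
aligned_64() {
  if [ $(( $1 % 64 )) -eq 0 ]; then echo aligned; else echo misaligned; fi
}
aligned_64 384   # -> aligned    (384 = 6 * 64)
aligned_64 250   # -> misaligned
```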

I don't think DRBD has an effect on alignment, but you might look at
keeping the metadata on another drive.

Block - rather than file - images are much faster.

Hope this helps,

Mark.