[Bug 614322] Re: libvirt not recognizing NUMA architecture

2010-11-07 Thread EAB
I can confirm this.

Most of our hosts have 64GB RAM and two Intel Nehalem hexa-core CPUs,
with 3 VMs using 16GB RAM each.
I noticed performance degradation, and the host also swapped out a lot of
memory; the swapping degraded performance dramatically.
vm.swappiness=0 in /etc/sysctl.conf did not help.

It seems that NUMA on Intel CPUs can be expensive because RAM may need to
be transferred from another node. With only 1 node (socket) there is no
problem; with 2 or more nodes you see slowdowns.

Even without the capabilities info you can prevent this behavior by pinning the vcpus.
You should spread your VMs over the available nodes:

numactl --hardware | grep 'node 0 cpus'
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22

Your XML should contain something like this:
<vcpu cpuset='0,2,4,6,8,10,12,14,16,18,20,22'>1</vcpu>
<vcpu cpuset='0,2,4,6,8,10,12,14,16,18,20,22'>2</vcpu>
<vcpu cpuset='0,2,4,6,8,10,12,14,16,18,20,22'>3</vcpu>
<vcpu cpuset='0,2,4,6,8,10,12,14,16,18,20,22'>4</vcpu>

The next VM should use the other node:
numactl --hardware | grep 'node 1 cpus'
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
<vcpu cpuset='1,3,5,7,9,11,13,15,17,19,21,23'>1</vcpu>
<vcpu cpuset='1,3,5,7,9,11,13,15,17,19,21,23'>2</vcpu>
<vcpu cpuset='1,3,5,7,9,11,13,15,17,19,21,23'>3</vcpu>
<vcpu cpuset='1,3,5,7,9,11,13,15,17,19,21,23'>4</vcpu>

So you don't need the NUMA info from virsh capabilities.
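
For an already running guest you can also pin vcpus on the fly with virsh
vcpupin instead of editing the XML. A rough sketch (untested; "guest1" and
the core list are examples, take the list from numactl --hardware):

# pin each of guest1's 4 vcpus to node 0's cores
for vcpu in 0 1 2 3; do
    virsh vcpupin guest1 $vcpu 0,2,4,6,8,10,12,14,16,18,20,22
done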

We now split up our hosts by the number of NUMA nodes to prevent
performance degradation and swapping.



[Bug 667986] Re: Disk image type defaults to raw since 0.7.5-5ubuntu27.5

2010-11-06 Thread EAB
We now use the type= attribute.

Something different, but also related to this new behavior:
live migration seems to crash VMs that were started before the upgrades. I
can't reproduce it because all hosts are upgraded by now, and I don't do live
migrations anymore after 3 failures out of 3.
The live migration only crashes the VM when we use qcow2 and migrate to another
host which is also upgraded. The destination host reads the qcow2 as raw.
Ouch!
In virsh dumpxml, the VM on the destination has <driver type='raw'/>; on
the source host it was <driver type='qcow2'/>.
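
Before trusting a live migration you can compare the driver type on both
sides; a quick sketch (domain and host names are just examples):

virsh dumpxml fqdn.com | grep '<driver'
ssh hostb.fqdn.com "virsh dumpxml fqdn.com | grep '<driver'"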



[Bug 667986] Re: Disk image type defaults to raw since 0.7.5-5ubuntu27.5

2010-11-06 Thread EAB
My bad.
libvirt-migrate-qemu-disks is unable to migrate my disks. The dumpxml output is
the same after using libvirt-migrate-qemu-disks.
That's why a live migration fails.



[Bug 667986] [NEW] Disk image type defaults to raw since 0.7.5-5ubuntu27.5

2010-10-28 Thread EAB
Public bug reported:

Ubuntu 10.04.1 LTS
libvirt-bin 0.7.5-5ubuntu27.6

Since 0.7.5-5ubuntu27.5 (http://www.ubuntuupdates.org/packages/show/253540) the
default type of disk images is raw.
Before this version the disk image type was detected automatically.

This new behavior results in boot failures and headaches.
After upgrading libvirt-bin and stopping and starting a VM, it looked like the
qcow2 image was completely unrecoverable. None of the recovery tools were able
to recover most of the data, only some snippets. Converting the qcow2 to raw
worked and it booted directly, so there was nothing wrong with the qcow2
image.
When I checked the kvm process with ps I found type=raw defined for the qcow2
image.
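
A quick way to see what the running process was given (a sketch; the exact
drive option text depends on the qemu-kvm version):

ps -ef | grep '[k]vm' | tr ',' '\n' | grep -E 'file=|format='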

Snippet from virsh dumpxml someVM before:
<disk type='file' device='disk'>
  <driver name='qemu' cache='writethrough'/>
  <source file='/etc/libvirt/qemu/disks/somediskimage.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

Snippet from virsh dumpxml someVM now:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writethrough'/>
  <source file='/etc/libvirt/qemu/disks/somediskimage.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

If a file is used (qcow2 or raw), detecting the image type is very easy:
root@kvm:~# file disk0.qcow2
disk0.qcow2: Qemu Image, Format: Qcow , Version: 2
root@kvm:~# qemu-img info disk0.qcow2
image: disk0.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 8.0G
cluster_size: 4096

Maybe it's better to try to detect the type first, and if that fails use
raw as the default (for block devices).
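
Something along these lines; a rough sketch of the proposed behavior
(untested, the image path is an example):

#!/bin/sh
# Detect the image format first and only fall back to raw when detection
# fails (e.g. for block devices).
img=/etc/libvirt/qemu/disks/disk0.qcow2
fmt=$(qemu-img info "$img" 2>/dev/null | awk -F': *' '/^file format/ {print $2}')
echo "<driver name='qemu' type='${fmt:-raw}' cache='writethrough'/>"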

** Affects: libvirt (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: libvirt-bin qcow2 raw virsh



[Bug 667986] Re: Disk image type defaults to raw since 0.7.5-5ubuntu27.5

2010-10-28 Thread EAB
It seems I missed that security notice. ;)

Without the security notice it is impossible to understand what changed by
reading only the changelog.
There is no link to the security notice in the changelog, and the links that
are in the changelog don't say a word about changed default behavior.
So for future upgrades I simply have to search the security notices to learn
about changes like this?



[Bug 556732] Re: blkid not used in fstab

2010-04-07 Thread EAB
Does it?
The UUIDs are known from the beginning, when the disks are partitioned.



[Bug 556732] Re: blkid not used in fstab

2010-04-07 Thread EAB
Snippet from menu.lst:
title   Ubuntu 9.10, kernel 2.6.31-20-server
uuid    618c7d1f-ef8c-423b-9f32-e8731d15daf2
kernel  /boot/vmlinuz-2.6.31-20-server root=UUID=618c7d1f-ef8c-423b-9f32-e8731d15daf2 ro quiet splash
initrd  /boot/initrd.img-2.6.31-20-server

/etc/fstab (after I fixed it):
UUID=618c7d1f-ef8c-423b-9f32-e8731d15daf2   /       xfs     defaults    0   0
UUID=409917ac-9244-4ae1-a5f8-d54b3b1665c6   swap    swap    defaults    0   0

The goal is to prevent issues when adding disks. When a new disk is
added and recognized before the current disk, the root and swap
partitions are no longer where the device names point: /dev/sda is the new
disk, and /dev/sdb1 is root and /dev/sdb2 is swap in this new situation.

UUIDs in fstab have been used by default since 8.04.

I'm only asking to put this on a wishlist.
plugins/ubuntu/dapper.py, line 260:
self.install_from_template('/etc/fstab', 'dapper_fstab', { 'parts' : disk.get_ordered_partitions(self.vm.disks), 'prefix' : self.disk_prefix })
And the template should use UUIDs; see the sketch below.
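
A rough sketch of what the template could generate, using blkid (untested;
/dev/sda1 and /dev/sda2 are example partitions):

#!/bin/sh
# Look up the UUIDs and emit UUID-based fstab entries instead of device paths.
ROOT_UUID=$(blkid -o value -s UUID /dev/sda1)
SWAP_UUID=$(blkid -o value -s UUID /dev/sda2)
printf 'UUID=%s\t/\txfs\tdefaults\t0\t0\n' "$ROOT_UUID"
printf 'UUID=%s\tswap\tswap\tdefaults\t0\t0\n' "$SWAP_UUID"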



[Bug 556732] [NEW] blkid not used in fstab

2010-04-06 Thread EAB
Public bug reported:

vmbuilder uses blkid (UUIDs) in /boot/grub/menu.lst.
It should use it in /etc/fstab too.

As it is now, /etc/fstab has to be rewritten (automated) with UUIDs to prevent
issues when adding disks.

** Affects: vm-builder (Ubuntu)
 Importance: Undecided
 Status: New



[Bug 555115] [NEW] Can't use XFS as filesystem

2010-04-04 Thread EAB
Public bug reported:

Ubuntu 9.10 Karmic

It's impossible to use XFS as the filesystem.

EXT3 seems to be hardcoded in:
/usr/lib/python2.6/dist-packages/VMBuilder/plugins/cli/__init__.py
It's not possible to use XFS without changing this file, which is bad. (sed
's/ext3/xfs/g'
/usr/lib/python2.6/dist-packages/VMBuilder/plugins/cli/__init__.py)

Option --rootsize=SIZE only accepts the size of the root filesystem.
Option --part=PATH only accepts mount points and sizes.

** Affects: vm-builder (Ubuntu)
 Importance: Undecided
 Status: New



[Bug 448674] Re: VM is suspended after live migrate in Karmic

2010-04-02 Thread EAB
Mark, I fully agree.
It should be fixed in Karmic.

I'm not going to use Lucid in production for the next 3-4 months.
It has to prove itself stable first.

Is it so hard to fix this bug? Probably it's just not high on the list to be
fixed.



[Bug 448674] Re: VM is suspended after live migrate in Karmic

2010-04-01 Thread EAB
In Karmic there is a workaround.
In Lucid this problem is not reproducible.

I also think it doesn't meet the SRU criteria.

Six months ago, when I reported this bug, I hoped it would be fixed within a
couple of weeks, but now I'd rather wait a month and test all needed features
again and again in Lucid (I've been doing that for some months already).
The migrate feature in Karmic works 9 out of 10 times with the workaround (the
failures come with random errors).
In Lucid I migrated 6 VMs hundreds of times without failures, so
KVM/Qemu/libvirt is much more stable in Lucid.

Why would I also like to upgrade to Lucid? Because Lucid is an LTS and has
features like KSM:
http://www.linux-kvm.com/content/using-ksm-kernel-samepage-merging-kvm

It's absolutely worth waiting now.

-- 
VM is suspended after live migrate in Karmic
https://bugs.launchpad.net/bugs/448674
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to libvirt in ubuntu.

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 448674] Re: VM is suspended after live migrate in Karmic

2010-03-02 Thread EAB
Migrating from Karmic -> Karmic seems to work for some time now.
This bug can be closed.



[Bug 448674] Re: VM is suspended after live migrate in Karmic

2010-01-22 Thread EAB
Migrating between these CPU types:
Testserver01:       2 x Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz
Productionserver01: 16 x Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
works for me with:
Karmic -> Lucid
Lucid -> Lucid

That was on 2010-01-13.
Now migration from Karmic to Lucid fails. There have been a lot of KVM/QEMU
updates on Lucid in the last days.



[Bug 448674] Re: VM is suspended after live migrate in Karmic

2010-01-13 Thread EAB
Finished some new tests.

The test is pretty much the same as in the bug description and comment 2
(https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/448674/comments/2),
only hostB is Lucid.

Brought HostA up-to-date:
Ubuntu Karmic 9.10
libvirt-bin 0.7.0-1ubuntu13.1
qemu-kvm 0.11.0-0ubuntu6.3
2.6.31-16-server

Upgraded HostB to:
Ubuntu Lucid 10.04 (development branch)
libvirt-bin 0.7.2-4ubuntu5
qemu-kvm 0.11.0-0ubuntu6.3
2.6.32-10-server

VM running Ubuntu Jaunty 9.04

- Karmic -> Lucid: migration works without the suspend/resume workaround.

- Lucid -> Lucid: migration works without the suspend/resume workaround.

For fun:
- Lucid -> Karmic (so back): migration works, but the suspend/resume workaround
is needed. The instance is migrated, but all partitions are gone, so I/O
errors and everything crashes ;)



[Bug 448674] Re: VM is suspended after live migrate in Karmic

2009-11-28 Thread EAB
I tested migrations on Karmic with guest OSes Ubuntu Hardy, Ubuntu Jaunty, and
Ubuntu Karmic.
The guests hang, and suspend+resume fixes this.



[Bug 448674] Re: VM is suspended after live migrate in Karmic

2009-11-02 Thread EAB
Seems to be a known issue and patches are available:
https://www.redhat.com/archives/libvir-list/2009-October/msg00019.html



[Bug 448674] Re: VM is suspended after live migrate in Karmic

2009-10-20 Thread EAB
Hosts:
CPU: Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz
RAM: 2GB
Disk: Gbit NFS mount on a NetApp FAS3040 (/etc/libvirt/qemu):
10.0.40.100:/vol/hl/disk_images /etc/libvirt/qemu/disks nfs rsize=32768,wsize=32768,hard,intr,tcp,timeo=600,rw 0 0

Installed both hosts with Ubuntu Jaunty 9.04.
aptitude install libvirt-bin qemu kvm host sysstat iptraf iptables portmap 
nfs-common realpath bridge-utils vlan ubuntu-virt-server python-vm-builder 
whois postfix hdparm

After some testing with migration (all failed because of several
errors/bugs) I upgraded to Ubuntu Karmic 9.10 Beta.

cat /etc/network/interfaces:
auto lo
iface lo inet loopback

auto eth1
iface eth1 inet manual
up ifconfig eth1 0.0.0.0 up
up ip link set eth1 promisc on

auto eth1.1503
iface eth1.1503 inet manual
up ifconfig eth1.1503 0.0.0.0 up
up ip link set eth1.1503 promisc on

auto br_extern
iface br_extern inet static
address 123.123.32.252 # HOSTA
address 123.123.32.253 # HOSTB
network 123.123.32.0
netmask 255.255.252.0
broadcast 123.123.35.255
gateway 123.123.32.1
bridge_ports eth1.1503
bridge_stp off


/etc/resolv.conf is correct
/etc/hosts is correct
Hostnames are correct and resolvable

VM running Ubuntu Jaunty 9.04:
fqdn.com.xml:
<?xml version="1.0"?>
<domain type='kvm'>
  <name>fqdn.com</name>
  <uuid>70a1c1f2-9a3e-4ee5-9f95-69e7e2682e15</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/etc/libvirt/qemu/disks/1378/fqdn.com/disk0.qcow2'/>
      <target dev='hda' bus='ide'/>
      <driver cache='writethrough'/>
    </disk>
    <interface type='bridge'>
      <mac address='56:16:43:76:ab:09'/>
      <source bridge='br_extern'/>
    </interface>
    <disk type='file' device='cdrom'>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' listen='127.0.0.1'/>
  </devices>
</domain>

Define instance:
/usr/bin/virsh define /etc/libvirt/qemu/xml/1378/fqdn.com.xml

Start instance:
/usr/bin/virsh start fqdn.com

ps auxf | grep kvm:
/usr/bin/kvm -S -M pc-0.11 -m 1024 -smp 1 -name fqdn.com -uuid 70a1c1f2-9a3e-4ee5-9f95-69e7e2682e15 -monitor unix:/var/run/libvirt/qemu/fqdn.com.monitor,server,nowait -boot dc -drive file=/etc/libvirt/qemu/disks/1378/fqdn.com/disk0.qcow2,if=ide,index=0,boot=on -drive file=,if=ide,media=cdrom,index=2 -net nic,macaddr=56:16:43:76:ab:09,vlan=0,name=nic.0 -net tap,fd=17,vlan=0,name=tap.0 -serial none -parallel none -usb -vnc 127.0.0.1:0 -vga cirrus

Migrate instance:
/usr/bin/virsh migrate fqdn.com qemu+ssh://hostb.fqdn.com/system

Migration will complete, but the instance seems to be suspended.
On HostB, to resume the instance:
/usr/bin/virsh suspend fqdn.com
/usr/bin/virsh resume fqdn.com

Running only virsh resume fqdn.com does nothing.
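
To avoid doing the suspend/resume by hand every time, a small wrapper helps;
a rough sketch (untested, domain and host names are the examples from above):

#!/bin/sh
# Live-migrate, then apply the suspend/resume workaround on the destination.
DOM=fqdn.com
DEST=hostb.fqdn.com
/usr/bin/virsh migrate "$DOM" "qemu+ssh://$DEST/system" && \
    ssh "$DEST" "virsh suspend $DOM && virsh resume $DOM"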

The hosts were initially installed with Ubuntu Jaunty 9.04 and upgraded to
Ubuntu Karmic 9.10 Beta. Maybe this is the problem?



[Bug 448674] [NEW] VM is suspended after live migrate in Karmic

2009-10-11 Thread EAB
Public bug reported:

Ubuntu Karmic 9.10
libvirt-bin 0.7.0-1ubuntu10
qemu-kvm 0.11.0-0ubuntu1
2.6.31-13-server
VM running Ubuntu Jaunty 9.04

On hostA:
virsh migrate fqdn.com qemu+ssh://hostb.fqdn.com/system
Migration completed in about 8 seconds.

Virsh tells me the VM is running:
virsh list | grep fqdn.com
Connecting to uri: qemu:///system
  1 fqdn.com             running

The VM seems to be frozen after migration on hostB.
After executing this on hostB the VM is working fine:
virsh suspend fqdn.com
virsh resume fqdn.com

It's expected behavior that the VM is suspended before migration, but it
needs to be resumed when the migration is completed.

** Affects: libvirt (Ubuntu)
 Importance: Undecided
 Status: New
