Hi all,

I have a two-node system (non-HA) running on an Intel Modular Server. The configuration is as follows:

3 x 140 GB hard drives in RAID 5 for the Proxmox boot systems
4 x 900 GB hard drives (with the LUN share key) for the Proxmox KVM images

I followed the "Extending Local Container Storage" article here:
http://pve.proxmox.com/wiki/Extending_Local_Container_Storage
and this one here:
http://pve.proxmox.com/wiki/Intel_Modular_Server#Pool2:_Shared_LVM_storage_for_KVM_guests
and everything has been going swimmingly.
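For context, the shared KVM storage was set up roughly as in that second article: the shared LUN was initialised as an LVM physical volume and put into a volume group, which both nodes then use as LVM storage. Something along these lines (the device node and volume group name here are only examples, not my exact values):

  # on one node, on the shared LUN presented by the modular server
  pvcreate /dev/sdb
  vgcreate kvmstorage /dev/sdb

The volume group was then added as shared LVM storage through the web interface.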
I decided today to update it from the .16 kernel to the .17 kernel. I know: if it ain't broke, don't fix it. One day I will follow that golden rule.

The update on the first node went well. I migrated all the VMs and containers from the first node to the second node before updating, and everything there is running fine. But when I went to restart node1, it wouldn't boot off the new kernel. The error I get on the boot screen is:

------------------------------------------------------------------------------
Booting 'Proxmox Virtual Environment GNU/Linux, with'
Loading Linux 2.6.32-16-pve
error: file not found
Loading Initial ramdisk ...
error: you need to boot both default and fallback entries
------------------------------------------------------------------------------

I was then able to choose the 2.6.32-11-pve kernel to boot from, and everything appeared to be fine. However, whenever I try to update (with aptitude and/or apt-get), it shows nothing to update, so I manually installed the new kernel and headers with:

apt-get install pve-kernel-2.6.32-16-pve
apt-get install pve-headers-2.6.32-16-pve

They installed, but I got the following error:

Generating grub.cfg ...
/usr/sbin/grub-probe: error: Couldn't find PV pv1. Check your device.map.

From Googling I have read that removing /boot/grub/grub.cfg and running apt-get -f install will regenerate the device.map file, but I don't want to do any damage here. I am also currently afraid to reboot this node in case it doesn't come back up on either the proper new kernel or the older one.
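Based on that reading, what I am tempted to try is roughly the following, but I would appreciate a sanity check before I touch GRUB on this node (this assumes grub-mkdevicemap is the right tool on this GRUB version):

  cp /boot/grub/device.map /boot/grub/device.map.bak   # keep a backup of the current map
  cat /boot/grub/device.map                            # see which disks GRUB currently maps
  pvs                                                  # compare with the PVs LVM actually sees
  grub-mkdevicemap                                     # regenerate the device map
  update-grub                                          # regenerate grub.cfg

Is that a sane approach, or is something else going on here?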
For reference, here is the full output from installing the kernel and headers, and the kernel currently running:

------------------------------------------------------------------------------
root@node1:/boot# apt-get install pve-kernel-2.6.32-16-pve
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  pve-kernel-2.6.32-16-pve
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/31.5 MB of archives.
After this operation, 0 B of additional disk space will be used.
Selecting previously deselected package pve-kernel-2.6.32-16-pve.
(Reading database ... 32700 files and directories currently installed.)
Unpacking pve-kernel-2.6.32-16-pve (from .../pve-kernel-2.6.32-16-pve_2.6.32-82_amd64.deb) ...
Setting up pve-kernel-2.6.32-16-pve (2.6.32-82) ...
update-initramfs: Generating /boot/initrd.img-2.6.32-16-pve
Generating grub.cfg ...
/usr/sbin/grub-probe: error: Couldn't find PV pv1. Check your device.map.

root@node1:/boot# apt-get install pve-headers-2.6.32-16-pve
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  pve-headers-2.6.32-16-pve
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,310 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://download.proxmox.com/debian/ squeeze/pve pve-headers-2.6.32-16-pve amd64 2.6.32-82 [6,310 kB]
Fetched 6,310 kB in 12s (492 kB/s)
Selecting previously deselected package pve-headers-2.6.32-16-pve.
(Reading database ... 35125 files and directories currently installed.)
Unpacking pve-headers-2.6.32-16-pve (from .../pve-headers-2.6.32-16-pve_2.6.32-82_amd64.deb) ...
Setting up pve-headers-2.6.32-16-pve (2.6.32-82) ...

root@node1:/boot# uname -a
Linux node1 2.6.32-11-pve #1 SMP Wed Apr 11 07:17:05 CEST 2012 x86_64 GNU/Linux
------------------------------------------------------------------------------

The other issue: once node1 was back up and running, I tried to migrate one of the Windows 2008 servers on KVMStorage from node2 back to node1 and got the following error:

------------------------------------------------------------------------------
Jan 31 14:52:43 starting migration of VM 10000 to node 'node1' (192.168.110.2)
Jan 31 14:52:43 copying disk images
Jan 31 14:52:43 starting VM 10000 on remote node 'node1'
Jan 31 14:52:45 starting migration tunnel
Jan 31 14:52:45 starting online/live migration on port 60000
Jan 31 14:52:47 migration status: active (transferred 68827161, remaining 4234821632), total 4312137728)
------- [ SNIP SNIP SNIP ] -----
Jan 31 14:54:22 migration status: active (transferred 3236421728, remaining 40189952), total 4312137728)
Jan 31 14:54:22 migration speed: 42.23 MB/s
Jan 31 14:54:22 migration status: completed
Jan 31 14:54:23 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' [email protected] qm resume 10000 --skiplock' failed: exit code 2
Jan 31 14:54:25 ERROR: migration finished with problems (duration 00:01:43)
TASK ERROR: migration problems
------------------------------------------------------------------------------
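The data transfer itself reports as completed; it is only the final resume on node1 that fails. Would it be safe to simply resume the VM by hand on node1, using the same command the migration task tried, for example:

  qm status 10000
  qm resume 10000 --skiplock

or does a failed resume like this need more cleanup first?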
Sorry, this is a long email with a couple of issues that have come out of what was supposed to be a simple update. Thanks to anyone who can help.

David
