Hi,

On 02/02/18 at 12:14, Martin Maurer wrote:


Eneko Lacunza <elacu...@binovo.es> wrote on 2 February 2018 at 10:14:
proxmox-ve: 5.1-35 (running kernel: 4.13.13-4-pve)
This kernel produced a lot of crashes (especially on Windows), even without any
migrations.

Please retest with latest kernel.
Just updated the cluster:
# pveversion -v
proxmox-ve: 5.1-38 (running kernel: 4.13.13-5-pve)
pve-manager: 5.1-43 (running version: 5.1-43/bdb08029)
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.4.76-1-pve: 4.4.76-94
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.4.67-1-pve: 4.4.67-92
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-20
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-6
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.4-pve2~bpo9
ceph: 12.2.2-1~bpo90+1

The reported VM has migrated fine so far, but this time I have seen a similar crash with two more Debian 9 VMs. Other VMs and OSes/distros continue to work well, so maybe there's a problem in the guest kernel. I continue to get crashes after Intel<->AMD migrations; Intel<->Intel migrations work without issue (I've done tens of them during the cluster upgrade).
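
For what it's worth, the usual prerequisite for live migration between different CPU vendors is that the guest runs a generic CPU model (such as the kvm64 default) rather than "host", so the guest never sees Intel- or AMD-specific CPU flags that vanish after the move. A minimal sketch of checking and pinning that, assuming VMID 100 stands in for one of the crashing Debian 9 guests (the VMID is made up for illustration):

# qm config 100 | grep ^cpu
(no "cpu:" line in the output means the VM already uses the kvm64 default)
# qm set 100 --cpu kvm64

Note that a changed CPU model only takes effect after a full stop/start of the VM, not after a reboot from inside the guest.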

I'll continue to test next week and will report back.

Thanks a lot
Eneko

--
Technical Director
Binovo IT Human Project, S.L.
Tel. 943569206
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

_______________________________________________
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
