Thanks for your answer.
The underlying storage is RAID 5 across six Samsung 850 Pro SSDs on a SAS MegaRAID controller, plus one spare:
#  megasasctl
a0       AVAGO 3108 MegaRAID      encl:2 ldrv:1  batt:FAULT, charge failed
a0d0      4766GiB RAID 5   1x6  optimal
unconfigured:  a0e0s6
a0e0s0      953GiB  a0d0  online
a0e0s1      953GiB  a0d0  online
a0e0s2      953GiB  a0d0  online
a0e0s3      953GiB  a0d0  online
a0e0s4      953GiB  a0d0  online
a0e0s5      953GiB  a0d0  online
a0e0s6      953GiB        ready
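
By the way, the controller battery shows FAULT / "charge failed". As far as I understand, on these MegaRAID controllers a failed BBU usually forces the cache from write-back to write-through, which hurts write latency but should not by itself crash a guest. This is roughly how I would double-check the battery and the current cache policy (just a sketch; it assumes the Avago/LSI megacli or storcli tools are installed and that the controller index is 0):

# megacli -AdpBbuCmd -GetBbuStatus -aALL
# megacli -LDInfo -LAll -aALL
or, with storcli:
# storcli /c0/bbu show
# storcli /c0/vall show all

The LDInfo / vall output should show whether the RAID 5 volume is currently running WriteThrough instead of its default WriteBack policy.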

The load average is around 2.33, 2.00, 1.10 and CPU usage is around 7.34% of 40 CPU(s).
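
For what it's worth, to see whether the host is spending time waiting on the storage I can also watch iowait for a while (vmstat is standard; iostat assumes the sysstat package is installed):

# vmstat 5
# iostat -x 5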

BSOD dates for VM 109: 02/08, 03/08, 04/08, 06/08, 08/08 (nothing recently);
for VM 110: 06/08, 08/08, 11/08, 18/08, 25/08, 02/09, 04/09.
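
To cross-check those dates against host-side events, I can search the rotated logs for the start/reset/shutdown tasks of both VMs (a rough sketch; log retention may not cover all of August):

# zgrep -hE 'qm(start|reset|shutdown):110' /var/log/syslog*
# zgrep -hE 'qm(start|reset|shutdown):109' /var/log/syslog*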

Here is the syslog around the last BSOD yesterday (we tried several times to stop the VM):
Sep  4 15:47:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:47:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:47:06 pve2 pvedaemon[931]: <root@pam> successful auth for user 'root@pam'
Sep  4 15:47:06 pve2 pvedaemon[4779]: <root@pam> successful auth for user 'root@pam'
Sep  4 15:47:15 pve2 pveproxy[39786]: worker 3754 finished
Sep  4 15:47:15 pve2 pveproxy[39786]: starting 1 worker(s)
Sep  4 15:47:15 pve2 pveproxy[39786]: worker 7023 started
Sep  4 15:47:15 pve2 pvedaemon[931]: <root@pam> starting task UPID:pve2:00001B70:32148B26:5B8E8CE3:vncproxy:110:root@pam:
Sep  4 15:47:15 pve2 pvedaemon[7024]: starting vnc proxy UPID:pve2:00001B70:32148B26:5B8E8CE3:vncproxy:110:root@pam:
Sep  4 15:47:16 pve2 pveproxy[7022]: worker exit
Sep  4 15:47:29 pve2 pvedaemon[7039]: starting vnc proxy UPID:pve2:00001B7F:321490AA:5B8E8CF1:vncproxy:109:root@pam:
Sep  4 15:47:29 pve2 pvedaemon[4545]: <root@pam> starting task UPID:pve2:00001B7F:321490AA:5B8E8CF1:vncproxy:109:root@pam:
Sep  4 15:47:33 pve2 pvedaemon[4545]: <root@pam> end task UPID:pve2:00001B7F:321490AA:5B8E8CF1:vncproxy:109:root@pam: OK
Sep  4 15:47:38 pve2 pvedaemon[931]: <root@pam> end task UPID:pve2:00001B70:32148B26:5B8E8CE3:vncproxy:110:root@pam: OK
Sep  4 15:47:44 pve2 pvedaemon[4779]: <root@pam> starting task UPID:pve2:00001BA0:32149646:5B8E8D00:vncproxy:110:root@pam:
Sep  4 15:47:44 pve2 pvedaemon[7072]: starting vnc proxy UPID:pve2:00001BA0:32149646:5B8E8D00:vncproxy:110:root@pam:
Sep  4 15:48:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:48:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:48:36 pve2 pvedaemon[4779]: <root@pam> starting task UPID:pve2:00001BF0:3214AACA:5B8E8D34:qmstart:110:root@pam:
Sep  4 15:48:36 pve2 pvedaemon[7152]: start VM 110: UPID:pve2:00001BF0:3214AACA:5B8E8D34:qmstart:110:root@pam:
Sep  4 15:48:36 pve2 pvedaemon[7152]: VM 110 already running
Sep  4 15:48:36 pve2 pvedaemon[4779]: <root@pam> end task UPID:pve2:00001BF0:3214AACA:5B8E8D34:qmstart:110:root@pam: VM 110 already running
Sep  4 15:48:37 pve2 pvedaemon[4779]: <root@pam> end task UPID:pve2:00001BA0:32149646:5B8E8D00:vncproxy:110:root@pam: OK
Sep  4 15:48:37 pve2 pvedaemon[4545]: <root@pam> starting task UPID:pve2:00001BF7:3214AB38:5B8E8D35:vncproxy:110:root@pam:
Sep  4 15:48:37 pve2 pvedaemon[7159]: starting vnc proxy UPID:pve2:00001BF7:3214AB38:5B8E8D35:vncproxy:110:root@pam:
Sep  4 15:48:57 pve2 pvedaemon[4545]: <root@pam> end task UPID:pve2:00001BF7:3214AB38:5B8E8D35:vncproxy:110:root@pam: OK
Sep  4 15:48:59 pve2 pvedaemon[4779]: <root@pam> starting task UPID:pve2:00001C12:3214B388:5B8E8D4B:vncproxy:110:root@pam:
Sep  4 15:48:59 pve2 pvedaemon[7186]: starting vnc proxy UPID:pve2:00001C12:3214B388:5B8E8D4B:vncproxy:110:root@pam:
Sep  4 15:49:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:49:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:50:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:50:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:51:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:51:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:52:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:52:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:52:43 pve2 pvedaemon[931]: <root@pam> starting task UPID:pve2:00001D4D:32150B3B:5B8E8E2B:vncproxy:110:root@pam:
Sep  4 15:52:43 pve2 pvedaemon[7501]: starting vnc proxy UPID:pve2:00001D4D:32150B3B:5B8E8E2B:vncproxy:110:root@pam:
Sep  4 15:52:52 pve2 rrdcached[1379]: flushing old values
Sep  4 15:52:52 pve2 rrdcached[1379]: rotating journals
Sep  4 15:52:52 pve2 rrdcached[1379]: started new journal /var/lib/rrdcached/journal/rrd.journal.1536069172.270060
Sep  4 15:52:52 pve2 rrdcached[1379]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1536061972.270091
Sep  4 15:52:52 pve2 pvedaemon[931]: <root@pam> starting task UPID:pve2:00001D56:32150EA8:5B8E8E34:qmreset:110:root@pam:
Sep  4 15:52:52 pve2 pvedaemon[931]: <root@pam> end task UPID:pve2:00001D56:32150EA8:5B8E8E34:qmreset:110:root@pam: OK
Sep  4 15:53:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:53:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:53:57 pve2 pvedaemon[7600]: starting vnc proxy UPID:pve2:00001DB0:321527E9:5B8E8E75:vncproxy:110:root@pam:
Sep  4 15:53:57 pve2 pvedaemon[4545]: <root@pam> starting task UPID:pve2:00001DB0:321527E9:5B8E8E75:vncproxy:110:root@pam:
Sep  4 15:53:59 pve2 pvedaemon[4545]: <root@pam> end task UPID:pve2:00001DB0:321527E9:5B8E8E75:vncproxy:110:root@pam: OK
Sep  4 15:54:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:54:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:54:08 pve2 pvedaemon[931]: <root@pam> starting task UPID:pve2:00001DC1:32152C73:5B8E8E80:qmshutdown:110:root@pam:
Sep  4 15:54:08 pve2 pvedaemon[7617]: shutdown VM 110: UPID:pve2:00001DC1:32152C73:5B8E8E80:qmshutdown:110:root@pam:
Sep  4 15:54:19 pve2 pveproxy[39786]: worker 4697 finished
Sep  4 15:54:19 pve2 pveproxy[39786]: starting 1 worker(s)
Sep  4 15:54:19 pve2 pveproxy[39786]: worker 7631 started
Sep  4 15:54:23 pve2 pveproxy[7630]: got inotify poll request in wrong process - disabling inotify
Sep  4 15:55:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:55:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:56:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:56:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:56:20 pve2 pvedaemon[4545]: <root@pam> successful auth for user 'root@pam'
Sep  4 15:57:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:57:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:57:20 pve2 pvedaemon[4545]: <root@pam> successful auth for user 'root@pam'
Sep  4 15:57:20 pve2 pvedaemon[931]: <root@pam> successful auth for user 'root@pam'
Sep  4 15:57:23 pve2 pveproxy[39786]: worker 5535 finished
Sep  4 15:57:23 pve2 pveproxy[39786]: starting 1 worker(s)
Sep  4 15:57:23 pve2 pveproxy[39786]: worker 7920 started
Sep  4 15:57:25 pve2 pveproxy[7919]: got inotify poll request in wrong process - disabling inotify
Sep  4 15:57:25 pve2 pveproxy[7919]: worker exit
Sep  4 15:57:40 pve2 pvedaemon[4545]: <root@pam> starting task UPID:pve2:00001F03:32157F30:5B8E8F54:vncproxy:110:root@pam:
Sep  4 15:57:40 pve2 pvedaemon[7939]: starting vnc proxy UPID:pve2:00001F03:32157F30:5B8E8F54:vncproxy:110:root@pam:
Sep  4 15:58:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:58:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:59:00 pve2 systemd[1]: Starting Proxmox VE replication runner...
Sep  4 15:59:00 pve2 systemd[1]: Started Proxmox VE replication runner.
Sep  4 15:59:38 pve2 pvedaemon[4545]: <root@pam> end task UPID:pve2:00001F03:32157F30:5B8E8F54:vncproxy:110:root@pam: OK
Sep  4 15:59:43 pve2 pvedaemon[8114]: starting vnc proxy UPID:pve2:00001FB2:3215AF0C:5B8E8FCF:vncproxy:110:root@pam:
Sep  4 15:59:43 pve2 pvedaemon[4779]: <root@pam> starting task UPID:pve2:00001FB2:3215AF0C:5B8E8FCF:vncproxy:110:root@pam:
Sep  4 15:59:44 pve2 pvedaemon[4779]: <root@pam> end task UPID:pve2:00001FB2:3215AF0C:5B8E8FCF:vncproxy:110:root@pam: OK

And here is the kern.log:
Sep  4 15:47:06 pve2 pvedaemon[4779]: <root@pam> successful auth for user 'root@pam'
Sep  4 15:47:15 pve2 pvedaemon[931]: <root@pam> starting task UPID:pve2:00001B70:32148B26:5B8E8CE3:vncproxy:110:root@pam:
Sep  4 15:47:29 pve2 pvedaemon[4545]: <root@pam> starting task UPID:pve2:00001B7F:321490AA:5B8E8CF1:vncproxy:109:root@pam:
Sep  4 15:47:33 pve2 pvedaemon[4545]: <root@pam> end task UPID:pve2:00001B7F:321490AA:5B8E8CF1:vncproxy:109:root@pam: OK
Sep  4 15:47:38 pve2 pvedaemon[931]: <root@pam> end task UPID:pve2:00001B70:32148B26:5B8E8CE3:vncproxy:110:root@pam: OK
Sep  4 15:47:44 pve2 pvedaemon[4779]: <root@pam> starting task UPID:pve2:00001BA0:32149646:5B8E8D00:vncproxy:110:root@pam:
Sep  4 15:48:36 pve2 pvedaemon[4779]: <root@pam> starting task UPID:pve2:00001BF0:3214AACA:5B8E8D34:qmstart:110:root@pam:
Sep  4 15:48:36 pve2 pvedaemon[4779]: <root@pam> end task UPID:pve2:00001BF0:3214AACA:5B8E8D34:qmstart:110:root@pam: VM 110 already running
Sep  4 15:48:37 pve2 pvedaemon[4779]: <root@pam> end task UPID:pve2:00001BA0:32149646:5B8E8D00:vncproxy:110:root@pam: OK
Sep  4 15:48:37 pve2 pvedaemon[4545]: <root@pam> starting task UPID:pve2:00001BF7:3214AB38:5B8E8D35:vncproxy:110:root@pam:
Sep  4 15:48:57 pve2 pvedaemon[4545]: <root@pam> end task UPID:pve2:00001BF7:3214AB38:5B8E8D35:vncproxy:110:root@pam: OK
Sep  4 15:48:59 pve2 pvedaemon[4779]: <root@pam> starting task UPID:pve2:00001C12:3214B388:5B8E8D4B:vncproxy:110:root@pam:
Sep  4 15:52:43 pve2 pvedaemon[931]: <root@pam> starting task UPID:pve2:00001D4D:32150B3B:5B8E8E2B:vncproxy:110:root@pam:
Sep  4 15:52:52 pve2 pvedaemon[931]: <root@pam> starting task UPID:pve2:00001D56:32150EA8:5B8E8E34:qmreset:110:root@pam:
Sep  4 15:52:52 pve2 pvedaemon[931]: <root@pam> end task UPID:pve2:00001D56:32150EA8:5B8E8E34:qmreset:110:root@pam: OK
Sep  4 15:53:57 pve2 pvedaemon[4545]: <root@pam> starting task UPID:pve2:00001DB0:321527E9:5B8E8E75:vncproxy:110:root@pam:
Sep  4 15:53:59 pve2 pvedaemon[4545]: <root@pam> end task UPID:pve2:00001DB0:321527E9:5B8E8E75:vncproxy:110:root@pam: OK
Sep  4 15:54:08 pve2 pvedaemon[931]: <root@pam> starting task UPID:pve2:00001DC1:32152C73:5B8E8E80:qmshutdown:110:root@pam:
Sep  4 15:56:20 pve2 pvedaemon[4545]: <root@pam> successful auth for user 'root@pam'
Sep  4 15:57:20 pve2 pvedaemon[4545]: <root@pam> successful auth for user 'root@pam'
Sep  4 15:57:20 pve2 pvedaemon[931]: <root@pam> successful auth for user 'root@pam'
Sep  4 15:57:40 pve2 pvedaemon[4545]: <root@pam> starting task UPID:pve2:00001F03:32157F30:5B8E8F54:vncproxy:110:root@pam:
Sep  4 15:59:38 pve2 pvedaemon[4545]: <root@pam> end task UPID:pve2:00001F03:32157F30:5B8E8F54:vncproxy:110:root@pam: OK
Sep  4 15:59:43 pve2 pvedaemon[4779]: <root@pam> starting task UPID:pve2:00001FB2:3215AF0C:5B8E8FCF:vncproxy:110:root@pam:
Sep  4 15:59:44 pve2 pvedaemon[4779]: <root@pam> end task UPID:pve2:00001FB2:3215AF0C:5B8E8FCF:vncproxy:110:root@pam: OK
I don't see anything relevant in the syslog or kern.log.
I haven't tried switching the vdisk temporarily to IDE yet, but I don't know what else to do...
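If it comes to that, my understanding is that the temporary switch to IDE would look roughly like this (a sketch only, done on the host; ide0 is already used by the CD-ROM in my config, so ide1 here is just an assumption):

# qm shutdown 110
(then in /etc/pve/qemu-server/110.conf change "scsi0: local-lvm:vm-110-disk-1,discard=on,size=500G" to "ide1: local-lvm:vm-110-disk-1,size=500G")
# qm set 110 --bootdisk ide1
# qm start 110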
Best regards,
Vincent MALIEN
On 05/09/2018 at 09:33, Yannis Milios wrote:
If both VMs fail with a BSOD, then definitely something must be wrong
somewhere.
Win2016 is supported in PVE 5+, so I don't think it's necessary to upgrade to
a newer version.
I would focus my attention on any potential hardware issues on the actual
host (RAM, storage, etc.).
What's your underlying storage type (RAID, SSD, HDD)? What are the load
average values on the host?
Any clues in the syslog? Have you tried switching the vdisk temporarily to
IDE (even though I don't think that will help in your case)?



On Wed, 5 Sep 2018 at 08:04, Vincent Malien <[email protected]> wrote:

Hi PVE users,
I run two VMs with Windows 2016 which often blue screen, and today they showed this
message: "guest has not initialized the display (yet)".
Here is my config:
proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-35 (running version: 5.1-35/722cc488)
pve-kernel-4.13.4-1-pve: 4.13.4-25
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.2-pve1~bpo90

qm config of one of the VMs:
agent: 1
bootdisk: scsi0
cores: 4
ide0: none,media=cdrom
memory: 12288
name: srverp
net0: virtio=F2:30:F0:DE:09:1F,bridge=vmbr0
numa: 0
ostype: win10
scsi0: local-lvm:vm-110-disk-1,discard=on,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=51c201a6-cd20-488c-9c89-f3f0fe4abd06
sockets: 1

The virtio drivers are virtio-win-0.1.141.
I checked the VM disk with the Windows tools; no errors found.
Should I upgrade to 5.2, or try something else?

--
Best regards,
Vincent MALIEN
12 Avenue Yves Farge
BP 20258
37702 St Pierre des Corps cedex 2
_______________________________________________
pve-user mailing list
[email protected]
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
