Re: [PVE-User] pveproxy dying, node unusable

2017-12-11 Thread Lindsay Mathieson
On 12/12/2017 2:14 AM, Emmanuel Kasper wrote:
> Hi Lindsay, as a quick check: is the cluster file system mounted on /etc/pve, and can you read files there normally (i.e. does cat /etc/pve/datacenter.cfg work)?
Unfortunately I hard reset both nodes as I needed them up. But a pvecm status showed
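The check Emmanuel suggests can be sketched as a short diagnostic sequence. This is a generic health check for the pmxcfs cluster filesystem, assuming a standard Proxmox VE node; only `cat /etc/pve/datacenter.cfg` and `pvecm status` come from the thread, the rest is a common supporting check.

```shell
# Verify the pmxcfs cluster filesystem is mounted and readable.
mountpoint /etc/pve          # should report "is a mountpoint" (fuse.pmxcfs)
cat /etc/pve/datacenter.cfg  # hangs or errors if pmxcfs is wedged
pvecm status                 # quorum and cluster membership overview
```

If the `cat` hangs, pveproxy workers blocking on /etc/pve would explain the unkillable processes described in the thread.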

Re: [PVE-User] pveproxy dying, node unusable

2017-12-11 Thread Emmanuel Kasper
On 12/11/2017 04:50 PM, Lindsay Mathieson wrote:
> Also I was unable to connect to the VMs on those nodes, not even via RDP
>
> On 12/12/2017 1:46 AM, Lindsay Mathieson wrote:
>> I dist-upgraded two nodes yesterday. Now both those nodes have multiple
>> unkillable pveproxy processes. dmesg

Re: [PVE-User] pveproxy dying, node unusable

2017-12-11 Thread Lindsay Mathieson
Also I was unable to connect to the VMs on those nodes, not even via RDP.
On 12/12/2017 1:46 AM, Lindsay Mathieson wrote:
> I dist-upgraded two nodes yesterday. Now both those nodes have multiple unkillable pveproxy processes. dmesg has many entries of: [50996.416909] INFO: task
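The dmesg "INFO: task ... blocked" messages point at processes stuck in uninterruptible sleep (state "D"), which is why they cannot be killed. A minimal, generic way to list them (nothing PVE-specific assumed):

```shell
# Hung-task warnings in dmesg correspond to processes in uninterruptible
# sleep (state "D"). A D-state process ignores SIGKILL until its blocked
# I/O completes, which is why the pveproxy workers could not be killed.
# Print the header line plus any D-state processes:
ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /^D/'
```

On a healthy node this prints only the header; stuck pveproxy workers would show up here with a STAT beginning with "D".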

Re: [PVE-User] sparse and compression

2017-12-11 Thread Andreas Herrmann
Hi Miguel, first of all: man zfs! On 11.12.2017 13:40, Miguel González wrote: > Is it advisable to use sparse on ZFS pools performance wise? And > compression? Which kind of compression? Sparse or not doesn't matter on SSDs. I would use compression because of fewer reads/writes to the disk, and modern
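Andreas's advice to enable compression can be sketched as follows; the dataset name `rpool/data` is a placeholder, not from the thread, and `lz4` is the usual low-CPU-overhead choice on ZFS:

```shell
# Enable LZ4 compression on a ZFS dataset (applies to newly written
# blocks only; existing data stays uncompressed until rewritten).
zfs set compression=lz4 rpool/data

# Check the setting and the achieved compression ratio.
zfs get compression,compressratio rpool/data
```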

[PVE-User] sparse and compression

2017-12-11 Thread Miguel González
Dear all, is it advisable to use sparse on ZFS pools, performance-wise? And compression? Which kind of compression? Can I change a zpool to sparse on the fly, or do I need to turn off all VMs before doing so? Why does a virtual disk show as 60G when originally it was 36 GB in raw format? NAME
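The size discrepancy Miguel asks about is usually the zvol's refreservation: a non-sparse zvol reserves its full volsize even if less data was written. A sketch of how to inspect and change this, assuming a zvol named `rpool/data/vm-100-disk-1` (a placeholder, not from the thread):

```shell
# Compare logical size vs space actually accounted to the zvol; a
# non-sparse zvol shows "used" near volsize because of the reservation.
zfs get volsize,used,refreservation,compressratio rpool/data/vm-100-disk-1

# Make an existing zvol sparse on the fly by dropping its reservation
# (no VM shutdown required; only space accounting changes).
zfs set refreservation=none rpool/data/vm-100-disk-1
```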

Re: [PVE-User] Setup default VM ID starting number

2017-12-11 Thread Thomas Lamprecht
Hi, On 12/11/2017 10:31 AM, F.Rust wrote:
> Hi all, is it possible to set a different starting number for VM ids?
No, currently not, I'm afraid.
> We have different clusters and don't want to have overlapping vm ids. So it would be great to simply say: Cluster 1 start VM-ids at 100,
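Since there is no cluster-wide option for the starting VMID, a common workaround is simply to pass an explicit ID per cluster when creating guests. A sketch, where the ID ranges (1xxx, 2xxx) and VM name are illustrative assumptions, not from the thread:

```shell
# Ask the API for the lowest free VMID (>= 100 by default).
pvesh get /cluster/nextid

# Or explicitly pick an ID in the range reserved for this cluster,
# e.g. 2xxx for "cluster 2", to avoid overlaps between clusters.
qm create 2001 --name test-vm
```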