Also, and more seriously, it seems to bollix the cluster - all nodes start showing with red dots and the VM list doesn't show any names. So far the only fix I have found is to reboot every node.
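Since only the PVE services seem to hang while ceph and ssh keep working, restarting just those services might be a lighter workaround than a full reboot. This is only a hedged sketch, not a confirmed fix - the service names (pve-cluster, pvedaemon, pveproxy, pvestatd) are assumed from a stock PVE 3.x install and the actual restart lines are left commented out:

```shell
#!/bin/sh
# Hypothetical recovery sketch (an assumption, not a confirmed fix):
# restart the PVE services instead of rebooting the whole node.
# Service names assumed from a stock Proxmox VE 3.x (sysvinit) install.
restart_pve_services() {
    for svc in pve-cluster pvedaemon pveproxy pvestatd; do
        echo "restarting $svc"
        # service "$svc" restart   # uncomment to actually restart on a node
    done
}

restart_pve_services
```

If the red dots clear after pvestatd comes back, that would at least narrow the problem down to the status daemons rather than the cluster filesystem itself.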
Oddly, only the pve services seem to be affected - ceph etc. keep working, and I can still ssh onto each node.

On 22 October 2015 at 21:56, Lindsay Mathieson <[email protected]> wrote:
> Proxmox 3.4 cluster with 3.10.0-13-pve kernel
>
> I'm currently trying to test gluster 3.7.5 using 3 Debian 8.2 VMs. The
> VMs are set up with:
> - kernel 4.2
> - Gluster 3.7.5
> - a replica 3 sharded datastore
> - 1 virtual nic per virtual gluster node
>
> The gluster datastore is exposed via gluster NFS to Proxmox storage so I
> can test running VMs off it.
>
> But whenever I try to move a VM disk to the gluster storage (via the NFS
> share), the nic on the VM just stops responding and its IP drops off the
> network. Sometimes it only happens after 23GB; once it got as far as 30GB.
>
> However, the VM is still there and I can use the noVNC console to access
> it and log on. It still thinks its nic is working, but it can't ping
> anything.
>
> I've tried with VIRTIO and e1000.
>
> Any suggestions as to what to look at next?
>
> --
> Lindsay

--
Lindsay
_______________________________________________
pve-user mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
