Hi, we have NetApp NFS with oVirt in production and have never experienced an outage during takeover/giveback. The default oVirt mount options should already handle a short NFS timeout (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys), but to tune it a little further you should set the disk timeout inside your guest VMs to at least 180 seconds; then you are safe.

example:
cat << EOF >> /etc/rc.d/rc.local
# Increase the SCSI disk timeout to 180 seconds
for i in /sys/class/scsi_generic/*/device/timeout; do
    echo 180 > "\$i"
done
EOF
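If you would rather not rely on rc.local, the same setting can be applied with a udev rule so it also covers disks hot-added later. A minimal sketch, assuming SCSI disks report device type 0 (the filename is arbitrary):

# /etc/udev/rules.d/99-scsi-timeout.rules (example filename)
# Set the command timeout to 180 seconds for all SCSI disks (type 0)
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="180"

Either way, verify after a reboot that /sys/class/scsi_generic/*/device/timeout reads 180 inside the guests.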


KR

On 18.04.19 10:45, klaasdem...@gmail.com wrote:
Hi,

I have a question regarding oVirt and the support of NetApp NFS storage. We have a MetroCluster for our virtual machine disks, but an HA failover of that (the active IP gets assigned to another node) seems to produce outages too long for sanlock to handle; that affects all VMs that have storage leases. NetApp says a "worst case" takeover time is 120 seconds. That would mean sanlock has already killed all VMs. Is anyone familiar with how we could set up oVirt to tolerate such storage outages? Do I need to use another type of storage for my oVirt VMs because that NFS implementation is unsuitable for oVirt?


Greetings

Klaas

--
Ladislav Humenik

System administrator / VI

