You also have fencing policies on the cluster.
Those allow you to disable fencing entirely, or to define the criteria for
skipping fencing (do not fence if the host maintains its storage lease,
and do not fence if more than X% of the hosts in the cluster aren't
responsive).

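For reference, these policies live on the cluster object in the engine. A sketch of the REST payload, assuming the oVirt REST API's `fencing_policy` element (the cluster id and the 50% threshold below are placeholder values):

```xml
<!-- PUT /ovirt-engine/api/clusters/{cluster:id} -->
<cluster>
  <fencing_policy>
    <enabled>true</enabled>
    <!-- skip fencing while the host still holds its storage lease -->
    <skip_if_sd_active>
      <enabled>true</enabled>
    </skip_if_sd_active>
    <!-- skip fencing if more than the threshold % of cluster hosts
         have connectivity problems (placeholder: 50) -->
    <skip_if_connectivity_broken>
      <enabled>true</enabled>
      <threshold>50</threshold>
    </skip_if_connectivity_broken>
  </fencing_policy>
</cluster>
```

The same settings are exposed in the webadmin portal under the cluster's Fencing Policy tab.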
On Thu, May 11, 2017 at 8:41 AM, plysan <> wrote:

> did you restart your nfs service or reboot your nfs host during the
> upgrade process?
> if you did, what was the storage domain's status in the webadmin portal at
> the time?
> what i'm suspecting is that you were restarting nfs while ovirt-engine was
> still using it.
> 2017-05-11 2:02 GMT+08:00 Jason Keltz <>:
>> Hi.
>> I recently upgraded my oVirt infrastructure to the latest
>> release, which went smoothly.  Thanks oVirt team! This
>> morning, I upgraded my NFS file server which manages the storage domain.  I
>> stopped ovirt engine, did a yum update to bring the server from its older
>> CentOS 7.2 release to CentOS 7.3, rebooted it, then restarted engine.   At
>> that point, engine was unhappy because our 4 virtualization hosts had a
>> total of 30 VMs all waiting to reconnect to storage.  The status of all the
>> VMs went to unknown in engine.  It took almost 2 hours before everything
>> was completely normal again.  It seems that the hosts were available long
>> before engine updated status.  I'm assuming it's better to restart engine
>> when I know that NFS has resumed on all the virtualization hosts.  However,
>> it's hard to know when that's happened, without trying to connect manually
>> to all the hosts.  Is there a way to warn engine that you're about to mess
>> with the storage domain, and you don't want it to do anything drastic? Sort
>> of like a "maintenance mode" for storage?    I would hate for it to start
>> trying to power off hosts via power management, or migrate VMs, when it
>> just needs to wait a bit...
>> Thanks!
>> Jason.
>> _______________________________________________
>> Users mailing list
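On Jason's question of knowing when NFS has resumed: one low-tech option is to probe the storage mount on each host with a timeout before restarting engine. A minimal sketch (the mount path in the note below is a hypothetical example; a hung NFS mount blocks stat calls indefinitely, hence the worker thread):

```python
import os
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def mount_responds(path, timeout=5.0):
    """Return True if os.statvfs(path) completes within `timeout` seconds.

    A hard NFS mount that has lost its server blocks statvfs() forever,
    so run the call in a worker thread and give up after the timeout.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(os.statvfs, path)
        future.result(timeout=timeout)
        return True
    except (FutureTimeout, OSError):
        # timed out, or the path does not exist / is not accessible
        return False
    finally:
        # don't block waiting on a worker stuck in a hung NFS call
        pool.shutdown(wait=False)
```

Run this against each host's storage domain mount point (for NFS domains, typically somewhere under /rhev/data-center/mnt/ on the host) and only restart engine once every host answers.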