Thanks for your answer.

Based on your advice, I improved my shutdown script: it now manually
performs all the actions the engine does when putting a host into
maintenance (via vdsClient: stopping the SPM, disconnecting the storage
pool, and disconnecting and unlocking all storage domains).
So I don't need my previous hacks anymore :) I can post the script if
anyone is interested.
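For reference, here is a minimal sketch of what such a script could look like. All UUIDs, the host ID, and the exact vdsClient argument lists are placeholders and assumptions about my setup; check `vdsClient -s 0 help <verb>` on your host for the real signatures before using anything like this:

```shell
#!/bin/bash
# Hypothetical sketch: manually mimic "move host to maintenance".
# SP_UUID and HOST_ID are placeholders for your own pool/host.
SP_UUID="<storage-pool-uuid>"
HOST_ID=1

# 1. Stop the SPM on this host (if it currently holds the SPM role)
vdsClient -s 0 spmStop "$SP_UUID"

# 2. Deactivate every storage domain in the pool
#    (argument list is an assumption; see
#     'vdsClient -s 0 help deactivateStorageDomain')
for SD_UUID in $(vdsClient -s 0 getStorageDomainsList "$SP_UUID"); do
    vdsClient -s 0 deactivateStorageDomain "$SD_UUID" "$SP_UUID"
done

# 3. Disconnect from the storage pool; this releases the sanlock
#    lockspaces held on the domains
vdsClient -s 0 disconnectStoragePool "$SP_UUID" "$HOST_ID" "$SP_UUID"

# 4. Disconnect from the NFS storage servers so the exports get
#    unmounted (connection parameters depend on your setup)
vdsClient -s 0 disconnectStorageServer "<domType>" "$SP_UUID" "<conList>"
```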

About the delay issue after a reboot, I investigated that a bit more.

It doesn't seem to be related to vdsm and the lockspace, but to the
engine itself. When the engine starts, it probably doesn't know the
host was down. So it doesn't send the domain information and keeps
trying to reconstruct the SPM on the main host until a timeout occurs
(found in the engine logs). That also explains why it works immediately
after restarting vdsm: the engine then sees the host as down for a
short time.

My usage is clearly not common, but maybe the ideal solution would be a
special "maintenance" state to inform the engine that the host will be
rebooted while the engine isn't alive.
But I can live with a small script that restarts vdsm once the engine boots.
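For what it's worth, that workaround can be as small as something like the following sketch. The engine FQDN is a placeholder, and I'm assuming the engine's /ovirt-engine/services/health health-check page is reachable from the host; adjust for your setup:

```shell
#!/bin/bash
# Hypothetical boot-time workaround: wait until the engine answers,
# then restart vdsmd so it reconnects to the storage domains cleanly.
# ENGINE_FQDN is a placeholder.
ENGINE_FQDN="engine.example.com"

# Poll the engine health page until it responds
until curl -fsk "https://${ENGINE_FQDN}/ovirt-engine/services/health" >/dev/null; do
    sleep 10
done

systemctl restart vdsmd
```

Run from a systemd unit ordered after the network is up, this avoids the 10-15 minute wait without any manual intervention.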

2016-12-03 20:58 GMT+01:00 Nir Soffer <nsof...@redhat.com>:
> On Sat, Dec 3, 2016 at 6:14 PM, Yoann Laissus <yoann.lais...@gmail.com> wrote:
>> Hello,
>>
>> I'm running into some weird issues with vdsm and my storage domains
>> after a reboot or a shutdown, and I can't figure out what's going on...
>>
>> Currently, my cluster (4.0.5 with hosted engine) is composed of one
>> main node (and another, inactive one, which is unrelated to this issue).
>> It has local storage exposed to oVirt via 3 NFS exports (one dedicated
>> to the hosted engine VM) reachable from my local network.
>>
>> When I want to shut down or reboot my main host (and thus the whole
>> cluster), I use a custom script:
>> 1. Shut down all VMs
>> 2. Shut down the engine VM
>> 3. Stop the HA agent and broker
>> 4. Stop vdsmd
>
> This leaves vdsm connected to all storage domains, and sanlock is
> still maintaining the lockspace on all storage domains.
>
>> 5. Release the sanlock on the hosted engine SD
>
> You should not do that; use local/global maintenance mode in the
> hosted engine agent instead.
>
>> 6. Shutdown / Reboot
>>
>> It works just fine, but at the next boot, VDSM takes at least 10-15
>> minutes to find the storage domains, except the hosted engine one. The
>> engine loops trying to reconstruct the SPM.
>> During this time, vdsClient getConnectedStoragePoolsList returns nothing,
>> and getStorageDomainsList returns only the hosted engine domain.
>> NFS exports are mountable from another server.
>
> The correct way to shut down a host is to move the host to maintenance.
> This deactivates all storage domains on this host, releases sanlock
> leases, and disconnects from the storage server (e.g. logs out from
> iSCSI connections, unmounts NFS mounts).
>
> If you don't do this, sanlock will need more time to join the
> lockspace the next time.
>
> I'm not sure what is the correct procedure when using hosted engine, since
> hosted engine will not let you put a host into maintenance if the hosted
> engine vm is running on this host. You can stop the hosted engine vm
> but then you cannot move the host into maintenance since you don't have
> engine :-)
>
> There must be a documented way to perform this operation, I hope that
> Simone will point us to the documentation.
>
> Nir
>
>>
>> But when I restart vdsm manually after the boot, it seems to detect
>> the storage domains immediately.
>>
>> Is there some kind of stale storage data used by vdsm, and a timeout
>> to invalidate it?
>> Am I missing something on the vdsm side in my shutdown procedure?
>>
>> Thanks !
>>
>> Engine and vdsm logs are attached.
>>
>>
>> --
>> Yoann Laissus
>>
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>



-- 
Yoann Laissus