> On 6 Apr 2018, at 12:45, Daniel Menzel <daniel.men...@hhi.fraunhofer.de> 
> wrote:
> 
> Hi Michael,
> thanks for your mail. Sorry, I forgot to write that. Yes, we have power
> management and fencing enabled on all hosts. We also tested this and found
> that it works perfectly, so this cannot be the reason, I guess.

Hi Daniel,
ok, then it’s worth looking into the details. Can you describe in more detail
what happens? What exact settings are you using for such a VM? Are you killing
the HE VM, other VMs, or both? It would be good to narrow it down a bit and then
review the exact flow.
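
For reference, one quick way to dump a VM's exact HA settings is the Python SDK
(ovirt-engine-sdk-python / ovirtsdk4). The following is only a rough, untested
sketch; the engine URL, credentials and the VM name are placeholders:

    # Sketch: print the HA configuration of one VM via ovirtsdk4.
    # The URL, credentials and VM name below are placeholders.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',        # placeholder
        ca_file='ca.pem',            # engine CA certificate
    )

    vms_service = connection.system_service().vms_service()
    for vm in vms_service.list(search='name=myvm'):   # adjust the VM name
        ha = vm.high_availability
        print(vm.name,
              'ha_enabled=%s' % (ha.enabled if ha else None),
              'ha_priority=%s' % (ha.priority if ha else None))

    connection.close()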

Thanks,
michal

> 
> Daniel
> 
> 
> 
> On 06.04.2018 11:11, Michal Skrivanek wrote:
>>> On 4 Apr 2018, at 15:36, Daniel Menzel <daniel.men...@hhi.fraunhofer.de> 
>>> wrote:
>>> 
>>> Hello,
>>> 
>>> we're successfully using a setup with 4 nodes and a replicated Gluster for
>>> storage. The engine is self-hosted. What we're dealing with at the moment
>>> is high availability: if a node fails (for example, simulated by a forced
>>> power loss) the engine comes back online within ~2 min, but guests (with
>>> the HA option enabled) come back online only after a very long grace time
>>> of ~5 min. As we have a reliable network (40 GbE) and reliable servers, I
>>> think the default grace times are way too high for us - is there any
>>> possibility to change those values?
>> And do you have Power Management (iLO, iDRAC, etc.) configured for your
>> hosts? Otherwise we have to resort to relatively long timeouts to make sure
>> the host is really dead.
>> Thanks,
>> michal
>>> 
>>> Thanks in advance!
>>> Daniel
>>> 
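
Regarding the power-management question earlier in the thread, the same SDK can
be used to double-check that fencing is actually enabled on every host. Again,
just a sketch with placeholder connection details:

    # Sketch: list all hosts and whether power management (fencing) is enabled,
    # using ovirtsdk4. The URL and credentials are placeholders.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',        # placeholder
        ca_file='ca.pem',
    )

    hosts_service = connection.system_service().hosts_service()
    for host in hosts_service.list():
        pm = host.power_management
        print(host.name,
              'power_management_enabled=%s' % (pm.enabled if pm else None))

    connection.close()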
