On Mon, Apr 23, 2018 at 8:06 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:
>
> On 23 Apr 2018, at 10:52, Daniel Menzel wrote:
>
> Hi Michal,
>
> in your last mail you wrote that the values can be turned down - how can
> this be done?
>
> Best
> Daniel
>

this is not anything we change very often, as it then decreases the system’s
tolerance to …
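[The "values" in question are engine-side timeout settings. On an oVirt engine these are usually inspected and changed with the `engine-config` tool; the commands below are a sketch to run on the engine host, and the specific option name `vdsTimeout` is given as an illustration - check `engine-config --list` on your own installation before changing anything.]

```
# list engine options and look for timeout-related ones
engine-config --list | grep -i timeout

# show the current value of one option (example: vdsTimeout)
engine-config -g vdsTimeout

# lower it (hypothetical value; lowering reduces tolerance to slow hosts)
engine-config -s vdsTimeout=120

# changes take effect after an engine restart
systemctl restart ovirt-engine
```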
On 12.04.2018 20:29, Michal Skrivanek wrote:
> On 12 Apr 2018, at 13:13, Daniel Menzel wrote:
>
> Hi there,
>
> does anyone have an idea how to decrease a virtual machine's downtime?
>
> Best
> Daniel
>
> On 06.04.2018 13:34, Daniel Menzel wrote:
>> Hi Michal,
>>

Hi Daniel,
adding Martin to …
On 06.04.2018 13:34, Daniel Menzel wrote:
Hi Michal,
(sorry for misspelling your name in my first mail).
The settings for the VMs are the following (oVirt 4.2):
1. HA checkbox enabled of course
2. "Target Storage Domain for VM Lease" -> left empty
3. "Resume Behavior" -> AUTO_RESUME
4. Priority for Migration -> High
5. "Watchdog …
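[For reference, the UI settings listed above can also be expressed through the oVirt 4.2 REST API (e.g. `PUT /ovirt-engine/api/vms/{id}`). The fragment below is a sketch: the element names follow the oVirt API model as far as I know, and the mapping of the UI priority to `high_availability/priority` is an assumption - verify against the `api/model` documentation of your engine.]

```
<vm>
  <high_availability>
    <enabled>true</enabled>
    <priority>100</priority>  <!-- presumably where the "High" priority setting lands -->
  </high_availability>
  <!-- "Target Storage Domain for VM Lease" was left empty above; with a
       lease set it would look roughly like this (the id is elided): -->
  <!--
  <lease>
    <storage_domain id="..."/>
  </lease>
  -->
</vm>
```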
> On 6 Apr 2018, at 12:45, Daniel Menzel wrote:
>
> Hi Michael,
> thanks for your mail. Sorry, I forgot to write that. Yes, we have power
> management and fencing enabled on all hosts. We also tested this and found
> out that it works perfectly. So this cannot be the reason I guess.
>
> Daniel
On 06.04.2018 11:11, Michal Skrivanek wrote:
> On 4 Apr 2018, at 15:36, Daniel Menzel wrote:
>
> Hello,
>
> we're successfully using a setup with 4 nodes and a replicated Gluster for
> storage. The engine is self-hosted. What we're dealing with at the moment is
> the high availability: if a node fails (for example simulated by a forced
> power loss) the engine comes back up online within ~2 min.