Team,
I have had issues like this with a bonding interface.
It turned out to be a problem on the switch side.
BR,
Deepwan Inc
On Wed, Oct 30, 2019 at 9:26 PM +0300, ccesa...@blueit.com.br wrote:
Hi Edward,
My oVirt version is:
Node
OS Version: RHEL - 7 -
Hi
On Sat., 2 Nov. 2019 at 08:51, Strahil Nikolov wrote:
> Have you tried with another ISO ?
This is quite weird; still better than my initial testing, but here it
goes. I've narrowed my attempts down to Q35 and UEFI (both on and off):
- Server 2019 ISO boots into installer with or without
Hello all.
I had a big problem at my company. We had an electrical problem and I lost
access to my iSCSI storage. After reinstalling the hosted engine I added the
storage back, but no virtual disk was found there.
Is it possible to recover them?
Best regards.
--
Best regards,
Kalil de A. Carvalho
Hi Edward,
Yes, it is disabled.
See the screenshot:
https://pasteboard.co/IEZS49Z.png
I have also already tested some filters, such as clean-traffic-gateway, but with no success.
Any other idea?
Regards
Carlos
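For reference, the network filters available on the host (including clean-traffic-gateway) can be inspected with virsh. This is a sketch, assuming libvirt is installed and libvirtd is running on the host:

```shell
# List all network filters defined in libvirt on this host
virsh nwfilter-list

# Dump the XML definition of the clean-traffic-gateway filter
virsh nwfilter-dumpxml clean-traffic-gateway
```

Comparing the filter's XML against the traffic you expect the VM to pass can show whether the filter itself is what is blocking it.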
___
Users mailing list -- users@ovirt.org
To unsubscribe send
Hi,
Can you give the workaround in
https://bugzilla.redhat.com/show_bug.cgi?id=1727987#c0 a try?
It works at least for RHEL 8 (and most probably CentOS 8 as well).
Best Regards,
Strahil Nikolov

On Nov 3, 2019 12:14, Mathieu Simon wrote:
>
> Hi
>
> On Sat., 2 Nov. 2019 at 08:51,
On your iSCSI storage, can you see the partitions?
Check with blkid, or with pvs and lvs.
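A minimal sequence for that check might look like the following; the device path /dev/sdb is a placeholder for your iSCSI LUN, and the commands assume util-linux and the LVM2 tools are installed:

```shell
# Placeholder device path: replace /dev/sdb with your actual iSCSI LUN
LUN=/dev/sdb

# Show any filesystem or LVM signature present on the LUN
blkid "$LUN"

# List LVM physical and logical volumes; oVirt block storage domains are
# built on LVM, so the domain's LVs should appear here if they are intact
pvs
lvs
```

If pvs and lvs show the domain's volumes, the disks are likely still there and only the storage domain metadata needs to be reattached.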
Kalil de A. Carvalho wrote on Sun., 3 Nov. 2019,
17:12:
> Hello all.
> I had a big problem at my company. We had an electrical problem and I lost
> access to my iSCSI storage. After reinstalling the hosted engine I added the
>
On 11/3/2019 12:52 AM, TomK wrote:
On 11/2/2019 4:07 AM, Strahil wrote:
You should be able to do that with the POSIX compliant domain ...
If not, it's better to open a bug so this behaviour is investigated
further.
I haven't tried the POSIX compliant domain, but from what I recall,
I thought that was it.
I remembered some experience I had with a test install that recommended
turning the network filter off.
You probably already did this, but when you turn off filtering or make
other changes to the logical network, like the MTU size, you must
completely shut down the attached VMs
On 11/3/2019 6:24 PM, TomK wrote:
On 11/3/2019 12:52 AM, TomK wrote:
On 11/2/2019 4:07 AM, Strahil wrote:
You should be able to do that with the POSIX compliant domain ...
If not, it's better to open a bug so this behaviour is investigated
further.
I haven't tried the POSIX compliant
While trying to deploy a single-node hyperconverged setup using the web UI, I ran
into this issue (oVirt 4.3.5, oVirt 4.3.6).
The mount point is "/gluster_bricks/engine", however "/gluster_bricks/engine/engine"
is not created.
I read a post about Gluster 6.1 and having to add
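If the deployment fails only because the brick subdirectory is missing, one possible workaround (a sketch; the paths come from this thread, and creating the brick directory by hand assumes the Gluster volume has not yet been created on it) is to create it manually with the ownership oVirt expects:

```shell
# Create the missing brick subdirectory under the mounted brick filesystem
mkdir -p /gluster_bricks/engine/engine

# oVirt hosts run VDSM as uid/gid 36 (vdsm:kvm); bricks must be owned by it
chown 36:36 /gluster_bricks/engine/engine
```

After that, re-running the hyperconverged deployment wizard should find the brick path it was looking for.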