Hi Strahil,
On Wednesday, 11 December 2019 17:47:18 CET Strahil wrote:
>
> Would you mind sharing the list of OVN devices you have?
> Currently in the UI, I don't have any network (except ovirtmgmt) and I see
> multiple devices.
>
> My guess is that I should remove all but the br-int , but I wou
This seems to be a much bigger generic issue with Ansible 2.9. Here is an
excerpt from the release notes:
"Renaming from _facts to _info
Ansible 2.9 renamed a lot of modules from _facts to
_info, because the modules do not return Ansible facts. Ansible
facts relate to a specific host. For exam
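If existing playbooks still reference the old module names, a bulk rename is one way to adapt them. A minimal sketch, assuming the affected playbooks use the oVirt modules covered by the 2.9 rename (the file content here is hypothetical; note that renaming alone may not be enough, since the _info modules return results via register rather than as Ansible facts):

```shell
# Demo on a scratch copy (hypothetical playbook content):
tmp=$(mktemp -d)
cat > "$tmp/hosts.yml" <<'EOF'
- ovirt_host_facts:
    pattern: cluster=Default
  register: host_result
EOF

# Rename every ovirt_*_facts module reference to ovirt_*_info.
sed -i 's/ovirt_\([a-z_]*\)_facts/ovirt_\1_info/g' "$tmp/hosts.yml"

# Show the result; the module name should now read ovirt_host_info.
grep 'ovirt_' "$tmp/hosts.yml"
```

On real playbooks the same sed would be applied to the actual files, ideally after a `grep -rl 'ovirt_[a-z_]*_facts'` preview of what will change.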
On Wednesday, 11 December 2019 16:37:50 CET Dominik Holler wrote:
> On Wed, Dec 11, 2019 at 1:21 PM Pavel Nakonechnyi
>
> > Are there plans to introduce such support? (or explicitly not to..)
>
> The feature is tracked in
> https://bugzilla.redhat.com/1782056
>
> If you would comment on the bu
On Thu, Dec 12, 2019 at 10:06 AM Pavel Nakonechnyi
wrote:
> On Wednesday, 11 December 2019 16:37:50 CET Dominik Holler wrote:
> > On Wed, Dec 11, 2019 at 1:21 PM Pavel Nakonechnyi
> >
>
> > > Are there plans to introduce such support? (or explicitly not to..)
> >
> > The feature is tracked in
>
I have failed on the hosted engine VM IP.
I set it up as DHCP, with /etc/hosts entries as well as DNS.
Can I recover from this, or do I need to start again?
[ INFO ] TASK [ovirt.hosted_engine_setup : Get target engine VM IP address
from VDSM stats]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engin
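For what it's worth, when the engine FQDN is pinned via /etc/hosts on the host in addition to DHCP/DNS, the entry is typically a single line like the following (the address and names here are placeholders):

```
192.0.2.10   engine.example.org engine
```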
On Wed, Dec 11, 2019 at 7:46 PM wrote:
> Some documentation, especially on older RHEV versions, seems to indicate
> that Gluster storage roles and compute server roles in an oVirt cluster are
> actually exclusive.
>
> Yet HCI is all about doing both, which is slightly confusing when you try
> to ov
On Wed, Dec 11, 2019 at 8:19 PM wrote:
>
> Yes, I have had the same and posted about it here somewhere: I believe it's
> an incompatible Ansible change.
>
> Here is the critical part of the message below:
> "The 'ovirt_host_facts' module has been renamed to 'ovirt_host_info', and the
> renamed o
> On Wed, Dec 11, 2019 at 5:31 PM
> Is VyOS installed on the host, or in a VM?
>
VyOS is installed on the ovirt node
>
>
> Does this mean that the VyOS VM on oVirt should forward layer 2 traffic to
> the VyOS VM on proxmox?
> Is there a way to share a VLAN? (This would avoid additional tunneli
Hi Sahina/Strahil,
We followed the recommended setup from the Gluster documentation; however, one of my
colleagues noticed a Python entry in the logs. It turns out it was a missing
symlink to a library.
We created the following symlink to all the master servers (cluster 1 oVirt 1)
and slave servers (Clu
On Thursday, 12 December 2019 10:23:28 CET Dominik Holler wrote:
> On Thu, Dec 12, 2019 at 10:06 AM Pavel Nakonechnyi
> > Could you direct me to the part of oVirt system which handles OVS tunnels
> > creation?
> >
> > It seems that at some point oVirt issues a command similar to the
> > following
On Thu, Dec 12, 2019 at 9:40 AM wrote:
> This seems to be a much bigger generic issue with Ansible 2.9. Here is an
> excerpt from the release notes:
>
> "Renaming from _facts to _info
>
> Ansible 2.9 renamed a lot of modules from _facts to
> _info, because the modules do not return Ansible facts.
On Thu, Dec 12, 2019 at 12:20 PM Pavel Nakonechnyi
wrote:
> On Thursday, 12 December 2019 10:23:28 CET Dominik Holler wrote:
> > On Thu, Dec 12, 2019 at 10:06 AM Pavel Nakonechnyi
> > > Could you direct me to the part of oVirt system which handles OVS
> tunnels
> > > creation?
> > >
> > > It see
Hi Sunny,
Thanks for replying. The issue was solved, and I added the comments to the
thread:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/ZAN3VFGL347RJZS2XEYR552XBJLYUQVS/#ZAN3VFGL347RJZS2XEYR552XBJLYUQVS
really appreciate you looking into it.
regards,
Adrian
On Thu, Dec 12, 2019 a
Forgot to add the log entry that led us to the solution for our particular
case:
Log =
/var/log/glusterfs/geo-replication/geo-master_slave1.mydomain2.com_geo-slave/gsyncd.log
-
[2019-12-11 20:37:27.831976] E [syncdu
On Thu, Dec 12, 2019 at 11:29 AM wrote:
> > On Wed, Dec 11, 2019 at 5:31 PM >
> > Is VyOS installed on the host, or in a VM?
> >
> VyOS is installed on the ovirt node
> >
> >
> > Does this mean that the VyOS VM on oVirt should forward layer 2 traffic
> to
> > the VyOS VM on proxmox?
> > Is there
Hello,
hosted engine deployment simply fails on EPYC with 4.3.7.
See my earlier posts "HostedEngine Deployment fails on AMD EPYC 7402P
4.3.7".
I was able to get this up and running by modifying HostedEngine VM XML
while the installation tries to start the engine. Great fun!
virsh -r dumpxml Hosted
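The workaround described above can be sketched roughly as follows; the CPU model strings are assumptions for illustration, not the exact values from that deployment. After `virsh -r dumpxml HostedEngine > /tmp/he.xml` (requires libvirt), the CPU element in the dumped XML can be relaxed to a model the EPYC host actually exposes. A demo on a sample snippet:

```shell
# Sample of the <cpu> element as it might appear in the dumped XML
# (model names are hypothetical):
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
<cpu match='exact'>
  <model fallback='forbid'>EPYC-IBPB</model>
</cpu>
EOF

# Relax the requested model so the guest can start on this host.
sed -i "s|<model fallback='forbid'>EPYC-IBPB</model>|<model fallback='allow'>EPYC</model>|" "$tmp"
cat "$tmp"
```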
> On Thu, Dec 12, 2019 at 11:29 AM
>
> I see.
> This will create an external OVN network.
> As far as I know, OVN networks do not allow mac spoofing, even if port
> security is disabled.
>
I have installed the vdsm hook to allow both promiscuous mode and MAC
spoofing, and I have the same experience.
You may just be able to use the username and password already created during
the installation: vdsm@ovirt/shibboleth.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org
On Thursday, 12 December 2019 13:09:39 CET Dominik Holler wrote:
> On Thu, Dec 12, 2019 at 12:20 PM Pavel Nakonechnyi
>
> wrote:
> > On Thursday, 12 December 2019 10:23:28 CET Dominik Holler wrote:
> > > On Thu, Dec 12, 2019 at 10:06 AM Pavel Nakonechnyi
> >
> > What creates all these chassis,
Hi Adrian,
Have you checked the following link:
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/config-backup-recovery
Best Regards,
Strahil Nikolov
On Dec 12, 2019
Thanks Martin, that actually helps a lot, because I was afraid of the
implications.
So the error must really be elsewhere. I am having another look at the logs
from the HostedEngineLocal and it seems to complain that no Gluster members are
up, not even the initial one.
I also saw no entries in
What got me derailed was the "ERROR" tag and the fact that it was the last
thing to happen on the outside ("waiting for engine to be up"), while the
HostedEngineLocal on the inside was looking for Gluster members it couldn't
find...
Strahil writes:
> Why do you use 'all_squash' ?
>
> all_squash: Map all uids and gids to the anonymous user. Useful for
> NFS-exported public FTP directories, news spool directories, etc. The
> opposite option is no_all_squash, which is the default setting.
AFAIK all_squash,anonuid=36,anongid=36
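For reference, an /etc/exports line combining these options usually looks like the following; the export path and client spec are placeholders, and uid/gid 36 correspond to vdsm:kvm on oVirt hosts:

```
/exports/data  *(rw,sync,all_squash,anonuid=36,anongid=36)
```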
Hi,
I'm planning to upgrade my installation from CentOS 7.6/oVirt 4.3.5 to
CentOS 7.7/oVirt 4.3.7.
According to
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.7_release_notes/new_features#enhancement_kernel
there's a new default setting for Spectre V2 mitigation
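Whether a Spectre V2 mitigation is active on a given host can be read from sysfs; a small sketch (the file is only present on kernels and architectures that expose it):

```shell
# Print the kernel's current Spectre V2 mitigation status, if exposed.
f=/sys/devices/system/cpu/vulnerabilities/spectre_v2
if [ -r "$f" ]; then
    cat "$f"
else
    echo "spectre_v2 status not exposed by this kernel"
fi
```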
On Thu, Dec 12, 2019 at 4:27 PM wrote:
> > On Thu, Dec 12, 2019 at 11:29 AM >
> >
> > I see.
> > This will create an external OVN network.
> > As far as I know, OVN networks do not allow mac spoofing, even if port
> > security is disabled.
> >
> I have installed the vdsm hook for allow both prom
So in doing some testing, I pulled the plug on my node where the hosted engine
was running. Rough timing was about 3.5 minutes before the portal was available
again.
I searched around first, but could not find if there was any way to speed up
the detection time in order to reboot the hosted eng
> On 12 Dec 2019, at 17:38, Matthias Leopold
> wrote:
>
> Hi,
>
> I'm planning to upgrade my installation from CentOS 7.6/oVirt 4.3.5 to CentOS
> 7.7/oVirt 4.3.7.
> According to
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.7_release_notes/new_features#enhance
Hi Dominik,
Thanks for the reply.
Sadly the openstack module is missing on the engine and I have to figure it out.
Can't I just undeploy the ovn and then redeploy it back ?
Best Regards,
Strahil Nikolov
On Dec 12, 2019 09:32, Dominik Holler wrote:
>
> The cleanest way to clean up is to remove a
I'm running a three server HCI. Up and running on 4.3.7 with no problems.
Today I updated to 4.3.8. Engine upgraded fine, rebooted. First host
updated fine, rebooted and let all gluster volumes heal. Put second host
in maintenance, upgraded successfully, rebooted. Waited for gluster
volumes to
On Tue, Dec 10, 2019 at 4:35 PM Robert Webb wrote:
...
> >https://ovirt.org/develop/troubleshooting-nfs-storage-issues.html
> >
> >Generally speaking:
> >
> >Files there are created by vdsm (vdsmd), but are used (when running VMs)
> >by qemu. So both of them need access.
>
> So the link to the NF
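Concretely, the troubleshooting page boils down to the export being owned by vdsm:kvm (uid/gid 36 on oVirt hosts) with a traversable mode, so both vdsm and qemu can use it. A quick ownership check might look like this; the export path is a placeholder, demoed on a scratch directory:

```shell
# Hypothetical export path; on a real setup this is the NFS export root.
export_dir=$(mktemp -d)          # stand-in for e.g. /exports/data

# On the NFS server the directory should be owned by 36:36 (vdsm:kvm)
# with mode 0755 -- chown needs root, so it is only shown as a comment:
#   chown 36:36 /exports/data
chmod 0755 "$export_dir"

# Verify ownership and mode from any host that mounts it:
stat -c 'owner=%u group=%g mode=%a' "$export_dir"
```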
On Fri, Dec 13, 2019 at 1:39 AM Nir Soffer wrote:
>
> On Tue, Dec 10, 2019 at 4:35 PM Robert Webb wrote:
>
> ...
> > >https://ovirt.org/develop/troubleshooting-nfs-storage-issues.html
> > >
> > >Generally speaking:
> > >
> > >Files there are created by vdsm (vdsmd), but are used (when running VMs
On Thu, Dec 12, 2019 at 6:36 PM Milan Zamazal wrote:
>
> Strahil writes:
>
> > Why do you use 'all_squash' ?
> >
> > all_squashMap all uids and gids to the anonymous user. Useful for
> > NFS-exported public FTP directories, news spool directories, etc. The
> > opposite option is no_all_squash, w
I was able to get the hosted engine started manually via Virsh after
re-creating a missing symlink in /var/run/vdsm/storage -- I later shut it
down and am still having the same problem with ha broker starting. It
appears that the problem *might* be with a corrupt ha metadata file,
although gluster
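Recreating such a link is straightforward once the storage-domain and image UUIDs are known from the vdsm logs. A sketch with made-up UUIDs and scratch directories standing in for /var/run/vdsm/storage and the mounted storage domain:

```shell
# Made-up UUIDs for illustration; use the ones from your own vdsm logs.
sd_uuid=11111111-1111-1111-1111-111111111111
img_uuid=22222222-2222-2222-2222-222222222222

# Stand-ins for /var/run/vdsm/storage and the image directory on the
# mounted storage domain:
run_dir=$(mktemp -d)
target=$(mktemp -d)

# Recreate the per-domain directory and the missing image symlink.
mkdir -p "$run_dir/$sd_uuid"
ln -s "$target" "$run_dir/$sd_uuid/$img_uuid"

# Confirm the link resolves to the image directory.
readlink "$run_dir/$sd_uuid/$img_uuid"
```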
Hi Dominik, All,
I've checked
'https://lists.ovirt.org/archives/list/users@ovirt.org/thread/W6U4XJHNMYMD3WIXDCPGOXLW6DFMCYIM/'
and the user managed to clear up and start over.
I have removed the ovn-external-provider from UI, but I forgot to copy the
data from the fields.
Do you know any ref
I believe I was able to get past this by stopping the engine volume then
unmounting the glusterfs engine mount on all hosts and re-starting the
volume. I was able to start hostedengine on host0.
I'm still facing a few problems:
1. I'm still seeing this issue in each host's logs:
Dec 13 00:57:54
On Thu, Dec 12, 2019 at 7:50 PM Strahil wrote:
> Hi Dominik,
>
> Thanks for the reply.
>
> Sadly the openstack module is missing on the engine and I have to figure
> it out.
>
The module can be installed by 'pip install openstacksdk', please find an
example in
https://github.com/oVirt/ovirt-syst
Hi,
We are running CentOS and ovirt 4.3.4. We currently have four nodes and have
set up the networks as follows:
ovirtmgmt management network - set up as a tagged vlan with a static IP
SAN network - set up as a tagged vlan with a static IP
student network - set up as a tagged vlan
fw network - set up