[ovirt-users] Re: Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-22 Thread Strahil Nikolov via Users
As I told you, you could just downgrade Gluster on all nodes and later plan to
live migrate the VM disks.
I had to copy my data to the new volume so I could avoid the ACL bug that
appears when using newer versions of Gluster.


Let's clarify some details:
1. Which version of oVirt and Gluster are you using?
2. You now have your old gluster volume attached to oVirt and the new volume
unused, right?
3. Did you copy the contents of the old volume to the new one?

Best Regards,
Strahil Nikolov

On 23 June 2020 at 4:34:19 GMT+03:00, C Williams wrote:
>Strahil,
>
>Thank You For Help !
>
>Downgrading Gluster to 6.5 got the original storage domain working
>again !
>
>After I finished copying the contents of the problematic volume to a new
>volume, I did the following:
>
>Unmounted the mount points
>Stopped the original problematic Gluster volume
>On each problematic peer,
>I downgraded Gluster to 6.5
>(yum downgrade glusterfs-6.5-1.el7.x86_64
>vdsm-gluster-4.30.46-1.el7.x86_64
>python2-gluster-6.5-1.el7.x86_64 glusterfs-libs-6.5-1.el7.x86_64
>glusterfs-cli-6.5-1.el7.x86_64 glusterfs-fuse-6.5-1.el7.x86_64
>glusterfs-rdma-6.5-1.el7.x86_64 glusterfs-api-6.5-1.el7.x86_64
>glusterfs-server-6.5-1.el7.x86_64 glusterfs-events-6.5-1.el7.x86_64
>glusterfs-client-xlators-6.5-1.el7.x86_64
>glusterfs-geo-replication-6.5-1.el7.x86_64)
>Restarted glusterd (systemctl restart glusterd)
>Restarted the problematic Gluster  volume
>Reattached the problematic storage domain
>Started the problematic storage domain
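The recovery sequence above can be sketched as a short shell script (the volume name and mount point are placeholders; the package list is abbreviated from the full one quoted in the message):

```shell
# Sketch of the downgrade/recovery procedure described above.
# Assumes CentOS 7 hosts with the oVirt 4.3 repos; VOLUME is a placeholder.
VOLUME=myvolume

# 1. Unmount clients and stop the problematic volume
umount /mnt/$VOLUME                     # on each client mount point
gluster volume stop $VOLUME

# 2. On each affected peer, downgrade the Gluster packages to 6.5
yum downgrade glusterfs-6.5-1.el7.x86_64 \
    glusterfs-libs-6.5-1.el7.x86_64 glusterfs-cli-6.5-1.el7.x86_64 \
    glusterfs-fuse-6.5-1.el7.x86_64 glusterfs-server-6.5-1.el7.x86_64 \
    glusterfs-api-6.5-1.el7.x86_64 glusterfs-client-xlators-6.5-1.el7.x86_64

# 3. Restart the daemon and the volume, then reattach the domain in oVirt
systemctl restart glusterd
gluster volume start $VOLUME
```

The full command in the message also downgrades vdsm-gluster, python2-gluster, and the rdma, events, and geo-replication packages on each peer.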
>
>Things work now. I can now run VMs and write data, copy virtual disks,
>move
>virtual disks to other storage domains, etc.
>
>I am very thankful that the storage domain is working again !
>
>How can I safely perform upgrades on Gluster? When will it be safe to do
>so?
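One hedged answer to the question above: until the ACL regression is fixed in a later release, the yum versionlock plugin can keep routine updates from pulling in a newer Gluster. A sketch, assuming CentOS 7 and the 6.5 packages from this thread:

```shell
# Pin the working Gluster 6.5 packages so a plain `yum update` skips them.
yum install -y yum-plugin-versionlock
yum versionlock add 'glusterfs*-6.5-1.el7'

# Show current locks; remove them later when a fixed version is available.
yum versionlock list
yum versionlock delete 'glusterfs*'   # run before a deliberate upgrade
```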
>
>Thank You Again For Your Help !
>
>On Mon, Jun 22, 2020 at 10:58 AM C Williams 
>wrote:
>
>> Strahil,
>>
>> I have downgraded the target. The copy from the problematic volume to
>the
>> target is going on now.
>> Once I have the data copied, I might downgrade the problematic
>volume's
>> Gluster to 6.5.
>> At that point I might reattach the original ovirt domain and see if
>it
>> will work again.
>> But the copy is going on right now.
>>
>> Thank You For Your Help !
>>
>> On Mon, Jun 22, 2020 at 10:52 AM Strahil Nikolov
>
>> wrote:
>>
>>> You should ensure that in the storage domain tab, the old storage is
>>> not visible.
>>>
>>> I still wonder why you didn't try to downgrade first.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On 22 June 2020 at 13:58:33 GMT+03:00, C Williams wrote:
>>> >Strahil,
>>> >
>>> >The GLCL3 storage domain was detached prior to attempting to add
>the
>>> >new
>>> >storage domain.
>>> >
>>> >Should I also "Remove" it ?
>>> >
>>> >Thank You For Your Help !
>>> >
>>> >-- Forwarded message -
>>> >From: Strahil Nikolov 
>>> >Date: Mon, Jun 22, 2020 at 12:50 AM
>>> >Subject: Re: [ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain
>>> >To: C Williams 
>>> >Cc: users 
>>> >
>>> >
>>> >You can't add the new volume as it contains the same data (UUID) as
>the
>>> >old
>>> >one , thus you need to detach the old one before adding the new one
>-
>>> >of
>>> >course this means downtime for all VMs on that storage.
>>> >
>>> >As you see, downgrading is simpler. For me v6.5 was working, while
>>> >anything above (6.6+) was causing a complete lockdown. Also v7.0 was
>>> >working, but it's only supported in oVirt 4.4.
>>> >
>>> >Best Regards,
>>> >Strahil Nikolov
>>> >
>>> >On 22 June 2020 at 7:21:15 GMT+03:00, C Williams wrote:
>>> >>Another question
>>> >>
>>> >>What version could I downgrade to safely ? I am at 6.9 .
>>> >>
>>> >>Thank You For Your Help !!
>>> >>
>>> >>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
>>> >>
>>> >>wrote:
>>> >>
>>> >>> You are definitely reading it wrong.
>>> >>> 1. I didn't create a new storage domain on top of this new volume.
>>> >>> 2. I used cli
>>> >>>
>>> >>> Something like this  (in your case it should be 'replica 3'):
>>> >>> gluster volume create newvol replica 3 arbiter 1
>>> >>ovirt1:/new/brick/path
>>> >>> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
>>> >>> gluster volume start newvol
>>> >>>
>>> >>> #Detach oldvol from ovirt
>>> >>>
>>> >>> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
>>> >>> mount -t glusterfs ovirt1:/newvol /mnt/newvol
>>> >>> cp -a /mnt/oldvol/* /mnt/newvol
>>> >>>
>>> >>> #Add only newvol as a storage domain in oVirt
>>> >>> #Import VMs
>>> >>>
>>> >>> I still think that you should downgrade your gluster packages!!!
>>> >>>
>>> >>> Best Regards,
>>> >>> Strahil Nikolov
>>> >>>
>>> >>> On 22 June 2020 at 0:43:46 GMT+03:00, C Williams wrote:
>>> >>> >Strahil,
>>> >>> >
>>> >>> >It sounds like  you used a "System Managed Volume" for the new
>>> >>storage
>>> >>> >domain,is that correct?
>>> >>> >
>>> >>> >Thank You For Your Help !
>>> >>> >
>>> >>> >On Sun, Jun 21, 2020 at 5:40 PM C Williams
>>> >

[ovirt-users] Re: oVirt install questions

2020-06-22 Thread Strahil Nikolov via Users
Hey David,

keep in mind that you need some big NICs.
I started my oVirt lab with a 1 Gbit NIC and later added 4 dual-port 1 Gbit
NICs, and I had to create multiple gluster volumes and multiple storage
domains.
Yet, Windows VMs cannot use software RAID for boot devices, thus it's a
pain in the @$$.
I think the optimum is to have several 10 Gbit NICs (at least 1 for gluster
and 1 for oVirt live migration).
Also, NVMe drives can be used as LVM cache for spinning disks.
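The LVM-cache remark above can be sketched as follows (device paths, sizes, and names are placeholders, not taken from the thread):

```shell
# Sketch: use an NVMe device as an LVM cache in front of a spinning disk.
# /dev/sdb = spinning disk, /dev/nvme0n1 = NVMe; all names are placeholders.
vgcreate gluster_vg /dev/sdb /dev/nvme0n1

# Data LV on the spinning disk, cache pool on the NVMe
lvcreate -L 500G -n brick_lv gluster_vg /dev/sdb
lvcreate --type cache-pool -L 100G -n cache_pool gluster_vg /dev/nvme0n1

# Attach the cache pool to the data LV (writethrough is the safer default)
lvconvert --type cache --cachepool gluster_vg/cache_pool \
    --cachemode writethrough gluster_vg/brick_lv
```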

Best Regards,
Strahil  Nikolov

On 22 June 2020 at 18:50:01 GMT+03:00, David White wrote:
>> For migration between hosts you need a shared storage. SAN, Gluster,
>CEPH, NFS, iSCSI are among the ones already supported (CEPH is a little
>bit experimental).
>
>Sounds like I'll be using NFS or Gluster after all.
>Thank you.
>
>> The engine is just a management layer. KVM/qemu has that option a
>long time ago, yet it's some manual work to do it.
>Yeah, this environment that I'm building is expected to grow over time
>(although that growth could go slowly), so I'm trying to architect
>things properly now to make future growth easier to deal with. I'm also
>trying to balance availability concerns with budget constraints
>starting out.
>
>Given that NFS would also be a single point of failure, I'll probably
>go with Gluster, as long as I can fit the storage requirements into the
>overall budget.
>
>
>Sent with ProtonMail Secure Email.
>
>‐‐‐ Original Message ‐‐‐
>On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users
> wrote:
>
>> On 22 June 2020 at 11:06:16 GMT+03:00, David White via
>usersus...@ovirt.org wrote:
>> 
>
>> > Thank you and Strahil for your responses.
>> > They were both very helpful.
>> > 
>
>> > > I think a hosted engine installation VM wants 16GB RAM configured
>> > > though I've built older versions with 8GB RAM.
>> > > For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
>> > > CentOS7 was OK with 1, CentOS6 maybe 512K.
>> > > The tendency is always increasing with updated OS versions.
>> > 
>
>> > Ok, so to clarify my question a little bit, I'm trying to figure
>out
>> > how much RAM I would need to reserve for the host OS (or oVirt
>Node).
>> > I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps
>> > that would suffice?
>> > And then as you noted, I would need to plan to give the engine
>16GB.
>> 
>
>> I run my engine on 4 GB of RAM, but I have no more than 20 VMs; the
>larger the setup, the more RAM the engine needs.
>> 
>
>> > > My minimum ovirt systems were mostly 48GB 16core, but most are
>now
>> > > 128GB 24core or more.
>> > 
>
>> > But this is the total amount of physical RAM in your systems,
>correct?
>> > Not the amount that you've reserved for your host OS?I've spec'd
>out
>> > some hardware, and am probably looking at purchasing two PowerEdge
>> > R820's to start, each with 64GB RAM and 32 cores.
>> > 
>
>> > > While ovirt can do what you would like it to do concerning a
>single
>> > > user interface, but with what you listed,
>> > > you're probably better off with just plain KVM/qemu and using
>> > > virt-manager for the interface.
>> > 
>
>> > Can you migrate VMs from 1 host to another with virt-manager, and
>can
>> > you take snapshots?
>> > If those two features aren't supported by virt-manager, then that
>would
>> > almost certainly be a deal breaker.
>> 
>
>> The engine is just a management layer. KVM/qemu has that option a
>long time ago, yet it's some manual work to do it.
>> 
>
>> > Come to think of it, if I decided to use local storage on each of
>the
>> > physical hosts, would I be able to migrate VMs? 
>> > Or do I have to use a Gluster or NFS store for that?
>> 
>
>> For migration between hosts you need a shared storage. SAN, Gluster,
>CEPH, NFS, iSCSI are among the ones already supported (CEPH is a little
>bit experimental).
>> 
>
>> > ‐‐‐ Original Message ‐‐‐
>> > On Sunday, June 21, 2020 5:58 PM, Edward Berger edwber...@gmail.com
>> > wrote:
>> > 
>
>> > > While ovirt can do what you would like it to do concerning a
>single
>> > > user interface, but with what you listed,
>> > > you're probably better off with just plain KVM/qemu and using
>> > > virt-manager for the interface.
>> > 
>
>> > > Those memory/cpu requirements you listed are really tiny and I
>> > > wouldn't recommend even trying ovirt on such challenged systems.
>> > > I would specify at least 3 hosts for a gluster hyperconverged
>system,
>> > > and a spare available that can take over if one of the hosts
>dies.
>> > 
>
>> > > I think a hosted engine installation VM wants 16GB RAM configured
>> > > though I've built older versions with 8GB RAM.
>> > > For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
>> > > CentOS7 was OK with 1, CentOS6 maybe 512K.
>> > > The tendency is always increasing with updated OS versions.
>> > 
>
>> > > My minimum ovirt systems were mostly 48GB 16core, but most are
>now
>> > > 128GB 24core or more.
>> > 
>
>> > > ovirt node 

[ovirt-users] Re: oVirt install questions

2020-06-22 Thread David White via Users
> For migration between hosts you need a shared storage. SAN, Gluster, CEPH, 
> NFS, iSCSI are among the ones already supported (CEPH is a little bit 
> experimental).

Sounds like I'll be using NFS or Gluster after all.
Thank you.

> The engine is just a management layer. KVM/qemu has that option a long time 
> ago, yet it's some manual work to do it.
Yeah, this environment that I'm building is expected to grow over time 
(although that growth could go slowly), so I'm trying to architect things 
properly now to make future growth easier to deal with. I'm also trying to 
balance availability concerns with budget constraints starting out.

Given that NFS would also be a single point of failure, I'll probably go with 
Gluster, as long as I can fit the storage requirements into the overall budget.


Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users  
wrote:

> On 22 June 2020 at 11:06:16 GMT+03:00, David White via usersus...@ovirt.org
> wrote:
> 

> > Thank you and Strahil for your responses.
> > They were both very helpful.
> > 

> > > I think a hosted engine installation VM wants 16GB RAM configured
> > > though I've built older versions with 8GB RAM.
> > > For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
> > > CentOS7 was OK with 1, CentOS6 maybe 512K.
> > > The tendency is always increasing with updated OS versions.
> > 

> > Ok, so to clarify my question a little bit, I'm trying to figure out
> > how much RAM I would need to reserve for the host OS (or oVirt Node).
> > I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps
> > that would suffice?
> > And then as you noted, I would need to plan to give the engine 16GB.
> 

> I run my engine on 4 GB of RAM, but I have no more than 20 VMs; the larger the
> setup, the more RAM the engine needs.
> 

> > > My minimum ovirt systems were mostly 48GB 16core, but most are now
> > > 128GB 24core or more.
> > 

> > But this is the total amount of physical RAM in your systems, correct?
> > Not the amount that you've reserved for your host OS?I've spec'd out
> > some hardware, and am probably looking at purchasing two PowerEdge
> > R820's to start, each with 64GB RAM and 32 cores.
> > 

> > > While ovirt can do what you would like it to do concerning a single
> > > user interface, but with what you listed,
> > > you're probably better off with just plain KVM/qemu and using
> > > virt-manager for the interface.
> > 

> > Can you migrate VMs from 1 host to another with virt-manager, and can
> > you take snapshots?
> > If those two features aren't supported by virt-manager, then that would
> > almost certainly be a deal breaker.
> 

> The engine is just a management layer. KVM/qemu has that option a long time 
> ago, yet it's some manual work to do it.
> 

> > Come to think of it, if I decided to use local storage on each of the
> > physical hosts, would I be able to migrate VMs? 
> > Or do I have to use a Gluster or NFS store for that?
> 

> For migration between hosts you need a shared storage. SAN, Gluster, CEPH, 
> NFS, iSCSI are among the ones already supported (CEPH is a little bit 
> experimental).
> 

> > ‐‐‐ Original Message ‐‐‐
> > On Sunday, June 21, 2020 5:58 PM, Edward Berger edwber...@gmail.com
> > wrote:
> > 

> > > While ovirt can do what you would like it to do concerning a single
> > > user interface, but with what you listed,
> > > you're probably better off with just plain KVM/qemu and using
> > > virt-manager for the interface.
> > 

> > > Those memory/cpu requirements you listed are really tiny and I
> > > wouldn't recommend even trying ovirt on such challenged systems.
> > > I would specify at least 3 hosts for a gluster hyperconverged system,
> > > and a spare available that can take over if one of the hosts dies.
> > 

> > > I think a hosted engine installation VM wants 16GB RAM configured
> > > though I've built older versions with 8GB RAM.
> > > For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
> > > CentOS7 was OK with 1, CentOS6 maybe 512K.
> > > The tendency is always increasing with updated OS versions.
> > 

> > > My minimum ovirt systems were mostly 48GB 16core, but most are now
> > > 128GB 24core or more.
> > 

> > > ovirt node ng is a prepackaged installer for an oVirt
> > > hypervisor/gluster host, with its cockpit interface you can create and
> > > install the hosted-engine VM for the user and admin web interface.  Its
> > > very good on enterprise server hardware with lots of RAM,CPU, and
> > > DISKS.
> > 

> > > On Sun, Jun 21, 2020 at 4:34 PM David White via Users
> > > users@ovirt.org wrote:
> > 

> > > > I'm reading through all of the documentation at
> > > > https://ovirt.org/documentation/, and am a bit overwhelmed with all of
> > > > the different options for installing oVirt.
> > > 

> > > > 

> > 

> > > > My particular use case is that I'm looking for a way to manage VMs
> > > > on 

[ovirt-users] Re: 4.4.1-rc5: Looking for correct way to configure machine=q35 instead of machine=pc for arch=x86_64

2020-06-22 Thread Glenn Marcy
I wonder if this openstack/nova defect is applicable to ovirt/vdsm since
it is also a client of that libvirt API?

https://opendev.org/openstack/nova/commit/a53c867913ff364c789aba1f7255dfcc68ff9f85

Regards,

Glenn Marcy

Sandro Bonazzola  wrote on 06/22/2020 04:49:13 AM:

> From: Sandro Bonazzola 
> To: Glenn Marcy , Asaf Rachmani 
> , Evgeny Slutsky 
> Cc: users 
> Date: 06/22/2020 04:52 AM
> Subject: [EXTERNAL] [ovirt-users] Re: 4.4.1-rc5: Looking for correct
> way to configure machine=q35 instead of machine=pc for arch=x86_64
> 
> +Asaf Rachmani , +Evgeny Slutsky can you please investigate?
> 
> On Mon, 22 Jun 2020 at 08:07, Glenn Marcy wrote:
> Hello, I am hoping for some insight from folks with more hosted 
> engine install experience.
> 
> When I try to install the hosted engine using the RC5 dist I get the
> following error during the startup
> of the HostedEngine VM:
> 
>   XML error: The PCI controller with index='0' must be model='pci-
> root' for this machine type, but model='pcie-root' was found instead
> 
> This is due to the HE Domain XML description using machine="pc-
> i440fx-rhel7.6.0".
> 
> I've tried to override the default of 'pc' from ovirt-ansible-
> hosted-engine-setup/defaults/main.yml:
> 
>   he_emulated_machine: pc
> 
> by passing to the ovirt-hosted-engine-setup script a --config-
> append=file parameter where file contains:
> 
>   [environment:default]
>   OVEHOSTED_VM/emulatedMachine=str:q35
> 
> When the "Create ovirt-hosted-engine-ha run directory" step finishes
> the vm.conf file contains:
> 
> cpuType=IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear
> emulatedMachine=q35
> 
> At the "Start ovirt-ha-broker service on the host" step that file is
> removed.  When that file appears
> again during the "Check engine VM health" step it now contains:
> 
> cpuType=IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear
> emulatedMachine=pc-i440fx-rhel7.6.0
> 
> After that the install fails with the metadata from "virsh dumpxml 
> HostedEngine" containing:
> 
> 1
> XML error: The PCI controller with index='0' must be model='pci-root' for
> this machine type, but model='pcie-root' was found instead
> 
> Interestingly enough, the HostedEngineLocal VM that is running the 
> appliance image has the value I need:
> 
>   hvm
> 
> Does anyone on the list have any experience with where this needs to
> be overridden?  Somewhere in the
> hosted engine setup or do I need to do something at a deeper level 
> like vdsm or libvirt?
> 
> Help much appreciated !
> 
> Thanks,
> Glenn
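For reference, the override attempt described above amounts to something like this sketch (the config file name is arbitrary; whether vdsm preserves the value is precisely the open question of this thread):

```shell
# Sketch of overriding the emulated machine type during HE deployment,
# as attempted in the message above. he-q35.conf is an arbitrary file name.
cat > he-q35.conf <<'EOF'
[environment:default]
OVEHOSTED_VM/emulatedMachine=str:q35
EOF

hosted-engine --deploy --config-append=he-q35.conf

# Verify what the HA layer actually wrote after deployment:
grep emulatedMachine /var/run/ovirt-hosted-engine-ha/vm.conf
```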
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/
> community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/
> users@ovirt.org/message/2S5NKX4L7VUYGMEAPKT553IBFAYZZESD/
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com   
> 
> Red Hat respects your work life balance. Therefore there is no need 
> to answer this email out of your office hours.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/IM3EQSBHBTORQZM5MAHPOWKYUXIKZCHQ/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X3RXOSR4RJAYQ2RXCID7XFPCULMWSML3/


[ovirt-users] Re: oVirt localization: you can help!

2020-06-22 Thread Jiří Sléžka
On 6/17/20 8:48 AM, Sandro Bonazzola wrote:
> Hi,
> if you have some time, here is a chance for helping oVirt project
> without requiring development skills.
> Help us localize oVirt to your natural language!
> oVirt Engine needs some work for:
> (see 
> https://zanata.phx.ovirt.org/iteration/view/ovirt-engine/ovirt-4.4?dswid=6591 
> )

I would like to, but it looks like I can't log in (or generate the activation
mail) using a Fedora or Google account...

An unexpected error has occurred. Please report this problem with
details of what you were attempting.

but I don't have a Jira account to do that...

Cheers,

Jiri

> 
>   * Czech 33.14% Translated
>   * German 98.87% Translated
>   * Italian 80.02% Translated
>   * Korean 99.72% Translated
>   * Portuguese (Brazil) 99.72% Translated
>   * Russian 30.73% Translated
>   * Spanish 98.87% Translated
> 
> ovirt-engine-ui-extensions:
> (see 
> https://zanata.phx.ovirt.org/iteration/view/ovirt-engine-ui-extensions/1.1?dswid=-5868
>  ) 
> 
>   * Czech 22.82%Translated
>   * German 89.9% Translated
>   * Italian 15.62% Translated
>   * Korean 99.17% Translated
>   * Portuguese (Brazil) 89.9% Translated
>   * Spanish 89.9% Translated
> 
> ovirt-web-ui
> (see https://zanata.phx.ovirt.org/iteration/view/ovirt-web-ui/1.6?dswid=-9733 
> )
> 
>   * Czech 24.8% Translated
>   * German 98.33% Translated
>   * Italian 12.85% Translated
>   * Korean 98.51% Translated
>   * Portuguese (Brazil) 98.33% Translated
> 
> If you're trying to help and  you encounter any issue with the
> translation platform let us know and we'll help you solve them.
> 
> Thanks,
> -- 
> 
> Sandro Bonazzola
> 
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> 
> Red Hat EMEA 
> 
> sbona...@redhat.com    
> 
>  
> 
> Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NYX3AXWEPCNBKKS6O65KNXXAP2UWRWG6/
> 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CUNTEMHLASGSM7ALUVMCSVU6PIGORQQP/


[ovirt-users] Re: Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-22 Thread C Williams
Strahil,

I have downgraded the target. The copy from the problematic volume to the
target is going on now.
Once I have the data copied, I might downgrade the problematic volume's
Gluster to 6.5.
At that point I might reattach the original ovirt domain and see if it will
work again.
But the copy is going on right now.

Thank You For Your Help !

On Mon, Jun 22, 2020 at 10:52 AM Strahil Nikolov 
wrote:

> You should ensure that in the storage domain tab, the old storage is not
> visible.
>
> I still wonder why you didn't try to downgrade first.
>
> Best Regards,
> Strahil Nikolov
>
> On 22 June 2020 at 13:58:33 GMT+03:00, C Williams wrote:
> >Strahil,
> >
> >The GLCL3 storage domain was detached prior to attempting to add the
> >new
> >storage domain.
> >
> >Should I also "Remove" it ?
> >
> >Thank You For Your Help !
> >
> >-- Forwarded message -
> >From: Strahil Nikolov 
> >Date: Mon, Jun 22, 2020 at 12:50 AM
> >Subject: Re: [ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain
> >To: C Williams 
> >Cc: users 
> >
> >
> >You can't add the new volume as it contains the same data (UUID) as the
> >old
> >one , thus you need to detach the old one before adding the new one -
> >of
> >course this means downtime for all VMs on that storage.
> >
> >As you see, downgrading is simpler. For me v6.5 was working, while
> >anything above (6.6+) was causing a complete lockdown. Also v7.0 was
> >working, but it's only supported in oVirt 4.4.
> >
> >Best Regards,
> >Strahil Nikolov
> >
> >On 22 June 2020 at 7:21:15 GMT+03:00, C Williams wrote:
> >>Another question
> >>
> >>What version could I downgrade to safely ? I am at 6.9 .
> >>
> >>Thank You For Your Help !!
> >>
> >>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
> >>
> >>wrote:
> >>
> >>> You are definitely reading it wrong.
> >>> 1. I didn't create a new storage domain on top of this new volume.
> >>> 2. I used cli
> >>>
> >>> Something like this  (in your case it should be 'replica 3'):
> >>> gluster volume create newvol replica 3 arbiter 1
> >>ovirt1:/new/brick/path
> >>> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
> >>> gluster volume start newvol
> >>>
> >>> #Detach oldvol from ovirt
> >>>
> >>> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
> >>> mount -t glusterfs ovirt1:/newvol /mnt/newvol
> >>> cp -a /mnt/oldvol/* /mnt/newvol
> >>>
> >>> #Add only newvol as a storage domain in oVirt
> >>> #Import VMs
> >>>
> >>> I still think that you should downgrade your gluster packages!!!
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>> On 22 June 2020 at 0:43:46 GMT+03:00, C Williams wrote:
> >>> >Strahil,
> >>> >
> >>> >It sounds like  you used a "System Managed Volume" for the new
> >>storage
> >>> >domain,is that correct?
> >>> >
> >>> >Thank You For Your Help !
> >>> >
> >>> >On Sun, Jun 21, 2020 at 5:40 PM C Williams
> >
> >>> >wrote:
> >>> >
> >>> >> Strahil,
> >>> >>
> >>> >> So you made another oVirt Storage Domain -- then copied the data
> >>with
> >>> >cp
> >>> >> -a from the failed volume to the new volume.
> >>> >>
> >>> >> At the root of the volume there will be the old domain folder id
> >>ex
> >>> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
> >>> >>  in my case. Did that cause issues with making the new domain
> >>since
> >>> >it is
> >>> >> the same folder id as the old one ?
> >>> >>
> >>> >> Thank You For Your Help !
> >>> >>
> >>> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
> >>> >
> >>> >> wrote:
> >>> >>
> >>> >>> In my situation I had  only the ovirt nodes.
> >>> >>>
> >>> >>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams wrote:
> >>> >>> >Strahil,
> >>> >>> >
> >>> >>> >So should I make the target volume on 3 bricks which do not
> >have
> >>> >ovirt
> >>> >>> >--
> >>> >>> >just gluster ? In other words (3) Centos 7 hosts ?
> >>> >>> >
> >>> >>> >Thank You For Your Help !
> >>> >>> >
> >>> >>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
> >>> >
> >>> >>> >wrote:
> >>> >>> >
> >>> >>> >> I created a fresh volume (which is not an ovirt storage
> >>> >>> >> domain), set the original storage domain in maintenance and
> >>> >>> >> detached it.
> >>> >>> >> Then I 'cp -a' the data from the old to the new volume. Next, I
> >>> >>> >> just added the new storage domain (the old one was a kind of a
> >>> >>> >> 'backup') - pointing to the new volume name.
> >>> >>> >>
> >>> >>> >> If you observe issues, I would recommend you to downgrade
> >>> >>> >> gluster packages one node at a time. Then you might be able to
> >>> >>> >> restore your oVirt operations.
> >>> >>> >>
> >>> >>> >> Best Regards,
> >>> >>> >> Strahil Nikolov
> >>> >>> >>
> >>> >>> >> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams wrote:
> >>> >>> >> >Strahil,
> >>> >>> >> >
> >>> >>> >> >Thanks for the follow up !
> >>> >>> >> >
> >>> >>> >> >How did you copy the data to another 

[ovirt-users] Re: KeyCloak Integration

2020-06-22 Thread Artur Socha
On Mon, 2020-06-22 at 15:14 +0200, Artur Socha wrote:
> Anton,
> I managed to re-create the issue on my local environment. 
> Previously I tested it against Keycloak 8.0.1 with users loaded from LDAP.
> Currently I have users/groups created via the Keycloak management panel. I need
> to investigate further which of the two changes is the root cause (it works
> fine with the old setup).

One more update: it seems the issue is Keycloak version related. I am trying to
figure out what was changed and how it affected engine SSO integration.
The latest Keycloak version I tested and verified as working is 9.0.3. Perhaps
it could be possible for you to use it until we fully support 10.0.x?

Artur
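For the API-token testing mentioned earlier in this thread, a typical request against the engine's SSO endpoint looks like the sketch below (host and credentials are placeholders; the endpoint and scope are the standard ones for oVirt 4.3):

```shell
# Sketch: obtain an OAuth token from the oVirt engine SSO endpoint.
# engine.example.com and the credentials are placeholders.
curl -sk \
  --data "grant_type=password" \
  --data "scope=ovirt-app-api" \
  --data "username=admin@internal" \
  --data "password=secret" \
  "https://engine.example.com/ovirt-engine/sso/oauth/token"

# A successful reply is JSON containing an access_token, which can then be
# sent as "Authorization: Bearer <token>" to /ovirt-engine/api.
```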
> Artur
> On Mon, 2020-06-22 at 11:05 +, Anton Louw wrote:
> > 
> > 
> > 
> > Hi Artur,
> >  
> > Great, thanks a lot! 
> > 
> >
> > Anton Louw
> > Cloud Engineer: Storage and Virtualization at Vox
> > T: 087 805  | D: 087 805 1572
> > M: N/A
> > E: anton.l...@voxtelecom.co.za
> > A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> > www.vox.co.za
> > 
> > From: Artur Socha 
> > 
> > 
> > Sent: 22 June 2020 11:23
> > 
> > To: Anton Louw ; users@ovirt.org
> > 
> > Cc: Stephen Hutchinson 
> > 
> > Subject: Re: [ovirt-users] KeyCloak Integration
> > 
> > 
> >
> > Hi Anton,
> >
> > Thanks for the specs. I have created a BZ issue for tracking:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1849569
> >
> > Feel free to add comments/change it when needed.
> >
> > Artur
> >
> > 
> > On Fri, 2020-06-19 at 10:57 +, Anton Louw wrote:
> > 
> > >  
> > > Hi Artur,
> > >  
> > > Please see below:
> > >  
> > > ovirt-engine.noarch 4.3.10.4-1.el7@ovirt-4.3
> > > ovirt-engine-extension-aaa-misc.noarch  1.0.4-1.el7   @ovirt-4.3
> > > mod_auth_openidc.x86_64 1.8.8-5.el7   @base
> > >  
> > > [root@virt ~]# cat /etc/*elease
> > > CentOS Linux release 7.7.1908 (Core)
> > > NAME="CentOS Linux"
> > > VERSION="7 (Core)"
> > > ID="centos"
> > > ID_LIKE="rhel fedora"
> > > VERSION_ID="7"
> > > PRETTY_NAME="CentOS Linux 7 (Core)"
> > > ANSI_COLOR="0;31"
> > > CPE_NAME="cpe:/o:centos:centos:7"
> > > HOME_URL="https://www.centos.org/;
> > > BUG_REPORT_URL="https://bugs.centos.org/;
> > >  
> > > CENTOS_MANTISBT_PROJECT="CentOS-7"
> > > CENTOS_MANTISBT_PROJECT_VERSION="7"
> > > REDHAT_SUPPORT_PRODUCT="centos"
> > > REDHAT_SUPPORT_PRODUCT_VERSION="7"
> > >  
> > > CentOS Linux release 7.7.1908 (Core)
> > > CentOS Linux release 7.7.1908 (Core)
> > >  
> > > KeyCloak – Server Version: 10.0.1
> > >  
> > > Thanks a lot for your help Artur. Please let me know if you need anything
> > > else.
> > >  
> > > 
> > > 
> > > From: Artur Socha 
> > > 
> > > 
> > > Sent: 19 June 2020 12:39
> > > 
> > > To: Anton Louw ;
> > > users@ovirt.org
> > > 
> > > Cc: Stephen Hutchinson 
> > > 
> > > Subject: Re: [ovirt-users] KeyCloak Integration
> > > 
> > > 
> > >  
> > > 
> > > On Fri, 2020-06-19 at 10:21 +, Anton Louw wrote:
> > > 
> > > >  
> > > > Yes I didn’t get to the OVN part yet, as I first wanted to test the if
> > > > the token can be obtained.
> > > >  
> > > > This is the first time we are testing KeyCloak in any environment, so we
> > > > have never been able to obtain a token for API access.
> > > >  
> > > 
> > > 
> > > Please post the exact versions of:
> > > 
> > > 
> > > - ovirt-engine* :   
> > > 
> > > 
> > > yum list --installed | grep ovirt-engine 
> > > 
> > > 
> > > yum list --installed | grep ovirt-engine-extension-aaa-misc
> > > 
> > > 
> > > yum list --installed | grep
> > > mod_auth_openidc
> > > 
> > > 
> > > - keycloak
> > > 
> > > 
> > > - OS
> > > 
> > > 
> > > cat /etc/*elease
> > > 
> > > 
> > >  
> > > 
> > > 
> > > I'll submit a bug ... which, most likely, I will assign to myself anyway
> > > :)
> > > 
> > > 
> > >  
> > > 
> > > 
> > > Artur
> > > 
> > > 
> > >  
> > >  
> > > 
> > > Anton Louw
> > > Cloud Engineer: Storage and Virtualization at Vox
> > > T: 087 805  | D: 087 805 1572
> > > M: N/A
> > > E: anton.l...@voxtelecom.co.za
> > > A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> > > www.vox.co.za

[ovirt-users] Re: Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-22 Thread Strahil Nikolov via Users
You should ensure that in the storage domain tab, the old storage is not 
visible.

I still wonder why you didn't try to downgrade first.

Best Regards,
Strahil Nikolov

На 22 юни 2020 г. 13:58:33 GMT+03:00, C Williams  
написа:
>Strahil,
>
>The GLCL3 storage domain was detached prior to attempting to add the
>new
>storage domain.
>
>Should I also "Remove" it ?
>
>Thank You For Your Help !
>
>-- Forwarded message -
>From: Strahil Nikolov 
>Date: Mon, Jun 22, 2020 at 12:50 AM
>Subject: Re: [ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain
>To: C Williams 
>Cc: users 
>
>
>You can't add the new volume as it contains the same data (UUID) as the
>old
>one , thus you need to detach the old one before adding the new one -
>of
>course this means downtime for all VMs on that storage.
>
>As you see, downgrading is simpler. For me v6.5 was working,
>while
>anything above (6.6+) was causing complete lockdown. Also v7.0 was
>working, but it's only supported in oVirt 4.4.
>
>Best Regards,
>Strahil Nikolov
>
>На 22 юни 2020 г. 7:21:15 GMT+03:00, C Williams
>
>написа:
>>Another question
>>
>>What version could I downgrade to safely ? I am at 6.9 .
>>
>>Thank You For Your Help !!
>>
>>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
>>
>>wrote:
>>
>>> You are definitely reading it wrong.
>>> 1. I didn't create a new storage domain on top of this new volume.
>>> 2. I used cli
>>>
>>> Something like this  (in your case it should be 'replica 3'):
>>> gluster volume create newvol replica 3 arbiter 1
>>ovirt1:/new/brick/path
>>> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
>>> gluster volume start newvol
>>>
>>> #Detach oldvol from ovirt
>>>
>>> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
>>> mount -t glusterfs ovirt1:/newvol /mnt/newvol
>>> cp -a /mnt/oldvol/* /mnt/newvol
>>>
>>> #Add only newvol as a storage domain in oVirt
>>> #Import VMs
>>>
>>> I still think that you should downgrade your gluster packages!!!
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> На 22 юни 2020 г. 0:43:46 GMT+03:00, C Williams
>>
>>> написа:
>>> >Strahil,
>>> >
>>> >It sounds like  you used a "System Managed Volume" for the new
>>storage
>>> >domain,is that correct?
>>> >
>>> >Thank You For Your Help !
>>> >
>>> >On Sun, Jun 21, 2020 at 5:40 PM C Williams
>
>>> >wrote:
>>> >
>>> >> Strahil,
>>> >>
>>> >> So you made another oVirt Storage Domain -- then copied the data
>>with
>>> >cp
>>> >> -a from the failed volume to the new volume.
>>> >>
>>> >> At the root of the volume there will be the old domain folder id
>>ex
>>> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
>>> >>  in my case. Did that cause issues with making the new domain
>>since
>>> >it is
>>> >> the same folder id as the old one ?
>>> >>
>>> >> Thank You For Your Help !
>>> >>
>>> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
>>> >
>>> >> wrote:
>>> >>
>>> >>> In my situation I had  only the ovirt nodes.
>>> >>>
>>> >>> На 21 юни 2020 г. 22:43:04 GMT+03:00, C Williams
>>> >
>>> >>> написа:
>>> >>> >Strahil,
>>> >>> >
>>> >>> >So should I make the target volume on 3 bricks which do not
>have
>>> >ovirt
>>> >>> >--
>>> >>> >just gluster ? In other words (3) Centos 7 hosts ?
>>> >>> >
>>> >>> >Thank You For Your Help !
>>> >>> >
>>> >>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
>>> >
>>> >>> >wrote:
>>> >>> >
>>> >> I  created a fresh volume  (which is not an ovirt storage
>>> >domain),
>>> >>> >set
>>> >>> >> the  original  storage  domain  in maintenance and detached
>>it.
>>> >>> >> Then I 'cp  -a ' the data from the old to the new volume.
>>Next,
>>> >I
>>> >>> >just
>>> >>> >> added  the  new  storage domain (the old  one  was  a  kind
>>of a
>>> >>> >> 'backup')  - pointing to the  new  volume  name.
>>> >>> >>
>>> >>> >> If  you  observe  issues ,  I would  recommend  you  to
>>downgrade
>>> >>> >> gluster  packages one node  at  a  time  . Then you might be
>>able
>>> >to
>>> >>> >> restore  your  oVirt operations.
>>> >>> >>
>>> >>> >> Best  Regards,
>>> >>> >> Strahil  Nikolov
>>> >>> >>
>>> >>> >> На 21 юни 2020 г. 18:01:31 GMT+03:00, C Williams
>>> >>> >
>>> >>> >> написа:
>>> >>> >> >Strahil,
>>> >>> >> >
>>> >>> >> >Thanks for the follow up !
>>> >>> >> >
>>> >>> >> >How did you copy the data to another volume ?
>>> >>> >> >
>>> >>> >> >I have set up another storage domain GLCLNEW1 with a new
>>volume
>>> >>> >imgnew1
>>> >>> >> >.
>>> >>> >> >How would you copy all of the data from the problematic
>>domain
>>> >GLCL3
>>> >>> >> >with
>>> >>> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve
>>all
>>> >the
>>> >>> >VMs,
>>> >>> >> >VM
>>> >>> >> >disks, settings, etc. ?
>>> >>> >> >
>>> >>> >> >Remember all of the regular ovirt disk copy, disk move, VM
>>> >export
>>> >>> >> >tools
>>> >>> >> >are failing and my VMs and disks are trapped on domain GLCL3
>>and
>>> >>> >volume
>>> >>> >> >images3 right now.
>>> >>> >> >
>>> >>> >> >Please let me know
>>> >>> >> >
>>> >>> >> >Thank You For Your Help !
>>> >>> >> >

[ovirt-users] Re: Hosted engine deployment doesn't add the host(s) to the /etc/hosts engine, even if hostname doesn't get resolved by DNS server

2020-06-22 Thread Yedidyah Bar David
On Mon, Jun 22, 2020 at 4:55 PM Gilboa Davara  wrote:
>
> On Mon, Jun 22, 2020 at 9:12 AM Yedidyah Bar David  wrote:
> >
> > I agree. Would you like to open a bug about this? It's not always easy
> > to know the root cause for the failure, nor to pass it through the
> > various components until it can reach the end-user.
>
> Sure. Happy to.
> Against which bugzilla component?

Perhaps first have a look at:

https://bugzilla.redhat.com/show_bug.cgi?id=1816002

and decide if to add a comment there, or create new bug (same
product/component).

>
> >
> >
> > Not sure it must abort. In principle, you could have supplied custom
> > ansible code to be run inside the appliance, to add the items yourself
> > to /etc/hosts, or in theory it can also happen that you configured stuff
> > so that the host fails DNS resolution but the engine VM does not.
> >
> > It also asked you:
> >
> > 2020-06-21 10:49:18,562-0400 DEBUG otopi.plugins.otopi.dialog.human
> > dialog.__logString:204 DIALOG:SEND Add lines for the
> > appliance itself and for this host to /etc/hosts on the engine VM?
> > 2020-06-21 10:49:18,562-0400 DEBUG otopi.plugins.otopi.dialog.human
> > dialog.__logString:204 DIALOG:SEND Note: ensuring that
> > this host could resolve the engine VM hostname is still up to you
> > 2020-06-21 10:49:18,563-0400 DEBUG otopi.plugins.otopi.dialog.human
> > dialog.__logString:204 DIALOG:SEND (Yes, No)[No]
> >
> > And you accepted the default 'No'.
> >
> > Perhaps we should change the default to Yes.
>
> I must have missed it.
> In this case:
> A. It is essentially PBKAC.
> B. I believe that given the fact the problem was actually detected by
> the installer early on, I believe the installer should enforce having
> either hosts entry or working DNS setup. (Or at least show a big red
> flashing message saying: "Look, are you sure you want to set up a
> broken hosted engine VM and that cannot possibly resolve the host
> > address and will certainly fail miserably once we try and deploy the
> hosted engine?")

I agree this makes sense, although as I said, it's not fully certain to fail.
The code emitting this warning is general - it's used both here and in
engine-setup.
I agree that here (in hosted-engine) it's more important.

>
> >
> > Of course - Yes is also a risk - a user not noticing it, then later on
> > changing the DNS, and not understanding why it "does not work"...
>
> Indeed.
>
> > In theory, you can examine the ansible code, and see what (not very
> > many) next steps it should have done if it didn't fail there, and do
> > that yourself (or decide that they are not important). In practice,
> > I'd personally deploy again cleanly, unless this is for a quick test
> > or something.
> >
> > Best regards,
> > --
>
> I'll simply clean up and redeploy.
> Hopefully after suffering a long string of PBKAC and DNS related
> failures, I'll finally have a working setup :)

Good luck!

>
> And again, many thanks for taking the time to assist me.
> I appreciate it!

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JI5KR36RXLYHZNMK3X3MWBNZD2DJ37PZ/
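As a side note on the "Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?" question discussed in this thread: accepting it makes the engine and host names resolvable without DNS. A minimal sketch of checking an /etc/hosts-style file for such an entry (the names and 192.0.2.x addresses are documentation placeholders):

```shell
#!/bin/sh
# Sketch: check whether a hostname is covered by an /etc/hosts-style file,
# the way the deploy dialog's added lines would cover the engine and host
# names. Names and addresses below are placeholders.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
192.0.2.10 engine.example.com
192.0.2.11 host1.example.com
EOF

hosts_lookup() {    # print the address mapped to hostname $2 in file $1
    awk -v h="$2" '$0 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) print $1 }' "$1"
}

hosts_lookup "$HOSTS_FILE" engine.example.com
```

If the lookup prints nothing for the engine FQDN and DNS is also broken, deployment is very likely to fail, which is the situation the dialog warns about.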


[ovirt-users] Re: ovirt/rhev grafana

2020-06-22 Thread Sandro Bonazzola
Il giorno lun 22 giu 2020 alle ore 09:24 Markus Schaufler <
markus.schauf...@digit-all.at> ha scritto:

> Hi!
>
> I've got an existing Grafana installation and would like to integrate our
> ovirt/rhev infrastructure.
> How could I do that?
>

I would recommend reading
https://blogs.ovirt.org/2018/06/build-ovirt-reports-using-grafana/ .
Please let us know if you intended something different.




>
> Thanks for any advice,
> Markus
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


[ovirt-users] Re: Hosted engine deployment doesn't add the host(s) to the /etc/hosts engine, even if hostname doesn't get resolved by DNS server

2020-06-22 Thread Gilboa Davara
On Mon, Jun 22, 2020 at 9:12 AM Yedidyah Bar David  wrote:
>
> I agree. Would you like to open a bug about this? It's not always easy
> to know the root cause for the failure, nor to pass it through the
> various components until it can reach the end-user.

Sure. Happy to.
Against which bugzilla component?

>
>
> Not sure it must abort. In principle, you could have supplied custom
> ansible code to be run inside the appliance, to add the items yourself
> to /etc/hosts, or in theory it can also happen that you configured stuff
> so that the host fails DNS resolution but the engine VM does not.
>
> It also asked you:
>
> 2020-06-21 10:49:18,562-0400 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:204 DIALOG:SEND Add lines for the
> appliance itself and for this host to /etc/hosts on the engine VM?
> 2020-06-21 10:49:18,562-0400 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:204 DIALOG:SEND Note: ensuring that
> this host could resolve the engine VM hostname is still up to you
> 2020-06-21 10:49:18,563-0400 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:204 DIALOG:SEND (Yes, No)[No]
>
> And you accepted the default 'No'.
>
> Perhaps we should change the default to Yes.

I must have missed it.
In this case:
A. It is essentially PBKAC.
B. I believe that given the fact the problem was actually detected by
the installer early on, I believe the installer should enforce having
either hosts entry or working DNS setup. (Or at least show a big red
flashing message saying: "Look, are you sure you want to set up a
broken hosted engine VM and that cannot possibly resolve the host
> address and will certainly fail miserably once we try and deploy the
hosted engine?")

>
> Of course - Yes is also a risk - a user not noticing it, then later on
> changing the DNS, and not understanding why it "does not work"...

Indeed.

> In theory, you can examine the ansible code, and see what (not very
> many) next steps it should have done if it didn't fail there, and do
> that yourself (or decide that they are not important). In practice,
> I'd personally deploy again cleanly, unless this is for a quick test
> or something.
>
> Best regards,
> --

I'll simply clean up and redeploy.
Hopefully after suffering a long string of PBKAC and DNS related
failures, I'll finally have a working setup :)

And again, many thanks for taking the time to assist me.
I appreciate it!

- Gilboa


[ovirt-users] Re: Hosted engine deployment fails consistently when trying to download files.

2020-06-22 Thread Gilboa Davara
On Mon, Jun 22, 2020 at 8:58 AM Yedidyah Bar David  wrote:
>
> On Sun, Jun 21, 2020 at 7:36 PM Gilboa Davara  wrote:
> >
> > On Thu, Jun 18, 2020 at 2:54 PM Yedidyah Bar David  wrote:
> > >
> > > On Thu, Jun 18, 2020 at 2:37 PM Gilboa Davara  wrote:
> > > >
> > > > On Wed, Jun 17, 2020 at 12:35 PM Yedidyah Bar David  
> > > > wrote:
> > > > > > However, when trying to install 4.4 on the test CentOS 8.x (now 8.2
> > > > > > after yesterday release), either manually (via hosted-engine 
> > > > > > --deploy)
> > > > > > or by using cockpit, fails when trying to download packages (see
> > > > > > attached logs) during the hosted engine deployment phase.
> > > > >
> > > > > Right. Didn't check them - I guess it's the same, no?
> > > >
> > > > Most likely you are correct. That said, the console version is more 
> > > > verbose.
> > > >
> > > >
> > > > > > Just to be clear, it is the hosted engine VM (during the deployment
> > > > > > process) that fails to automatically download packages, _not_ the
> > > > > > host.
> > > > >
> > > > > Exactly. That's why I asked you (because the logs do not reveal that)
> > > > > to manually login there and try to install (update) the package, and
> > > > > see what happens, why it fails, etc. Can you please try that? Thanks.
> > > >
> > > > Sadly enough, the failure comes early in the hosted engine deployment
> > > > process, making the VM completely inaccessible.
> > > > While I see qemu-kvm briefly start, it usually dies before I have any
> > > > chance to access it.
> > > >
> > > > Can I somehow prevent hosted-engine --deploy from destroying the
> > > > hosted engine VM, when the deployment fails, giving me access to it?
> > >
> > > This is how it should behave normally, it does not kill the VM.
> > > Perhaps check logs, try to find who/what killed it.
> > >
> > > Anyway: Earlier today I pushed this patch:
> > >
> > > https://gerrit.ovirt.org/109730
> > >
> > > Didn't yet get to try verifying it. Would you like to try? You can get
> > > an RPM from the CI build linked there, or download the patch and apply
> > > it manually (in the "gitweb" link [1]).
> > >
> > > Then, you can do:
> > >
> > > hosted-engine --deploy --ansible-extra-vars=he_offline_deployment=true
> > >
> > > If you try this, please share the results. Thanks!
> > >
> > > [1] 
> > > https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-setup.git;a=commitdiff_plain;h=f77fa8b84ed6d8a74cbe56b95accb1e8131afbb5
>
> Now filed https://bugzilla.redhat.com/1849517 for this.
>
> > >
> > > Best regards,
> > > --
> > > Didi
> > >
> >
> > Good news. I managed to connect to the VM and solve the problem.
>
> Glad to hear that, thanks for the report!
>
> >
> > For some odd reason our primary DNS server had upstream connection
> > issues and all the requests were silently handled by our secondary DNS
> > server.
> > Not sure I understand why, but while the ovirt host did manage to
> > silently spill over to the secondary DNS, the hosted engine, at least
> > during the initial deployment phase (when it still uses the host's
> > dnsmasq), failed to spill over to the secondary DNS server and the
> > deployment failed.
>
> Sounds like a bug in dnsmasq, although I am not sure.
>
> That said, DNS/DHCP are out of scope for oVirt. We simply assume they
> are robust.
>
> In retrospective, what do you think we should have done differently
> to make it easier for you to find (and fix) the problem?
>
> Best regards,
> --
> Didi

In retrospect, the main problem was the non-descriptive error message
generated by DNF (which has nothing to do with the ovirt installer).
That said, this could easily be circumvented by adding a simple
network-test script to the installer playbook.

Then again, the problem was clearly on my side...

- Gilboa
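The "simple network-test script" suggested in this thread could look roughly like this: probe each nameserver in resolv.conf individually, so a dead primary cannot hide behind silent failover. The probe target and the use of `dig` are assumptions; the probing loop is opt-in (RUN_PROBES=1) because it needs network access.

```shell
#!/bin/sh
# Sketch of the per-nameserver check suggested in the thread. A broken
# primary DNS server can otherwise hide behind resolver failover.

list_nameservers() {    # print nameserver IPs from a resolv.conf-style file
    awk '/^nameserver[ \t]/ {print $2}' "$1"
}

probe() {               # one DNS query against one server, short timeout
    dig +time=3 +tries=1 "@$1" "$2" +short >/dev/null 2>&1
}

if [ "${RUN_PROBES:-0}" = "1" ]; then    # opt-in: real probes need dig and network
    for ns in $(list_nameservers /etc/resolv.conf); do
        if probe "$ns" resources.ovirt.org; then
            echo "$ns OK"
        else
            echo "$ns FAILED"
        fi
    done
fi
```

A per-server "FAILED" line would have surfaced the dead primary immediately, instead of the non-descriptive DNF error seen during deployment.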


[ovirt-users] Re: KeyCloak Integration

2020-06-22 Thread Artur Socha
Anton,

I managed to re-create the issue on my local environment. Previously I
tested it against Keycloak 8.0.1 with users loaded from LDAP. Currently I have
users/groups created via the Keycloak management panel. I need to investigate
further which of the two changes is the root cause (it works fine with the old
setup).

Artur
On Mon, 2020-06-22 at 11:05 +, Anton Louw wrote:
> 
> 
> 
> Hi Artur,
>  
> Great, thanks a lot! 
> 
>  
> 
> 
> Anton Louw
> Cloud Engineer: Storage and Virtualization at Vox
> T:  087 805  | D: 087 805 1572
> M: N/A
> E: anton.l...@voxtelecom.co.za
> A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
> 
> From: Artur Socha 
> 
> 
> Sent: 22 June 2020 11:23
> 
> To: Anton Louw ; users@ovirt.org
> 
> Cc: Stephen Hutchinson 
> 
> Subject: Re: [ovirt-users] KeyCloak Integration
> 
> 
>  
> 
> Hi Anton,
> 
> 
> Thanks for the specs. I have created a BZ issue for tracking:
> 
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1849569
> 
> 
> Feel free to add comments/change it when needed.
> 
> 
>  
> 
> 
> Artur
> 
> 
>  
> 
> 
> On Fri, 2020-06-19 at 10:57 +, Anton Louw wrote:
> 
> >  
> > Hi Artur,
> >  
> > Please see below:
> >  
> > ovirt-engine.noarch 4.3.10.4-1.el7@ovirt-4.3
> > ovirt-engine-extension-aaa-misc.noarch  1.0.4-1.el7   @ovirt-4.3
> > mod_auth_openidc.x86_64 1.8.8-5.el7   @base
> >  
> > [root@virt ~]# cat /etc/*elease
> > CentOS Linux release 7.7.1908 (Core)
> > NAME="CentOS Linux"
> > VERSION="7 (Core)"
> > ID="centos"
> > ID_LIKE="rhel fedora"
> > VERSION_ID="7"
> > PRETTY_NAME="CentOS Linux 7 (Core)"
> > ANSI_COLOR="0;31"
> > CPE_NAME="cpe:/o:centos:centos:7"
> > HOME_URL="https://www.centos.org/"
> > BUG_REPORT_URL="https://bugs.centos.org/"
> >  
> > CENTOS_MANTISBT_PROJECT="CentOS-7"
> > CENTOS_MANTISBT_PROJECT_VERSION="7"
> > REDHAT_SUPPORT_PRODUCT="centos"
> > REDHAT_SUPPORT_PRODUCT_VERSION="7"
> >  
> > CentOS Linux release 7.7.1908 (Core)
> > CentOS Linux release 7.7.1908 (Core)
> >  
> > KeyCloak – Server Version: 10.0.1
> >  
> > Thanks a lot for your help Artur. Please let me know if you need anything
> > else.
> >  
> > 
> > 
> > From: Artur Socha 
> > 
> > 
> > Sent: 19 June 2020 12:39
> > 
> > To: Anton Louw ;
> > users@ovirt.org
> > 
> > Cc: Stephen Hutchinson 
> > 
> > Subject: Re: [ovirt-users] KeyCloak Integration
> > 
> > 
> >  
> > 
> > On Fri, 2020-06-19 at 10:21 +, Anton Louw wrote:
> > 
> > >  
> > > Yes I didn’t get to the OVN part yet, as I first wanted to test if the
> > > token can be obtained.
> > >  
> > > This is the first time we are testing KeyCloak in any environment, so we
> > > have never been able to obtain a token for API access.
> > >  
> > 
> > 
> > Please post the exact versions of:
> > 
> > 
> > - ovirt-engine* :   
> > 
> > 
> > yum list --installed | grep ovirt-engine 
> > 
> > 
> > yum list --installed | grep
> > ovirt-engine-extension-aaa-misc
> > 
> > 
> > yum list --installed | grep
> > mod_auth_openidc
> > 
> > 
> > - keycloak
> > 
> > 
> > - OS
> > 
> > 
> > cat /etc/*elease
> > 
> > 
> >  
> > 
> > 
> > I'll submit a bug ... which, most likely, I will assign to myself anyway :)
> > 
> > 
> >  
> > 
> > 
> > Artur
> > 
> > 
> > Anton Louw
> > Cloud Engineer: Storage and Virtualization at Vox
> > T: 087 805  | D: 087 805 1572
> > M: N/A
> > E: anton.l...@voxtelecom.co.za
> > A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> > www.vox.co.za
> > 
> > > Thanks
> > >  
> > > 
> > > 
> > > From: Artur Socha 
> > > 
> > > 
> > > Sent: 19 June 2020 12:16
> > > 
> > > To: Anton Louw ;
> > > users@ovirt.org
> > > 
> > > Cc: Stephen Hutchinson 
> > > 
> > > Subject: Re: [ovirt-users] KeyCloak Integration
> > > 
> > > 
> > >  
> > > 
> > > On Fri, 2020-06-19 at 10:03 +, Anton Louw wrote:
> > > 
> > > >  
> > > > Hi Artur,
> > > >  
> > > > Sure, please see below output:
> > > >  
> > > > [root@virt ~]# curl -vvv -H "Accept:application/json" '
> > > > https://virt.example.co.za/ovirt-engine/sso/oauth/token?grant_type=password&username=myuser&password=mypass&scope=ovirt-app-api'
> > > > * About to connect() to 
> > > > 
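For reference, the token request attempted in the curl command above can be assembled like this. The endpoint and parameter names are the ones shown in this thread; the hostname and credentials are placeholders. Passing credentials as query parameters (as the curl does) can leak them into proxy and server logs, so treat this strictly as a test call.

```shell
#!/bin/sh
# Sketch: build the oVirt SSO token request URL from the thread.
# Hostname and credentials are placeholders, not real values.
build_token_url() {
    printf 'https://%s/ovirt-engine/sso/oauth/token?grant_type=password&username=%s&password=%s&scope=ovirt-app-api' \
        "$1" "$2" "$3"
}

URL=$(build_token_url virt.example.co.za myuser mypass)
echo "$URL"
# curl -sk -H 'Accept: application/json' "$URL"
```

On success the response is JSON containing an `access_token` field, which is then passed as a Bearer token to the REST API.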

[ovirt-users] Re: oVirt localization: you can help!

2020-06-22 Thread Douglas Landgraf
Portuguese: 100% translated in three components.

On Wed, Jun 17, 2020 at 2:51 AM Sandro Bonazzola 
wrote:

> Hi,
> if you have some time, here is a chance for helping oVirt project without
> requiring development skills.
> Help us localize oVirt to your natural language!
> oVirt Engine needs some work for: (see
> https://zanata.phx.ovirt.org/iteration/view/ovirt-engine/ovirt-4.4?dswid=6591
>  )
>
>- Czech 33.14% Translated
>- German 98.87% Translated
>- Italian 80.02% Translated
>- Korean 99.72% Translated
>- Portuguese (Brazil) 99.72% Translated
>- Russian 30.73% Translated
>- Spanish 98.87% Translated
>
> ovirt-engine-ui-extensions: (see
> https://zanata.phx.ovirt.org/iteration/view/ovirt-engine-ui-extensions/1.1?dswid=-5868
>  )
>
>- Czech 22.82%Translated
>- German 89.9% Translated
>- Italian 15.62% Translated
>- Korean 99.17% Translated
>- Portuguese (Brazil) 89.9% Translated
>- Spanish 89.9% Translated
>
> ovirt-web-ui (see
> https://zanata.phx.ovirt.org/iteration/view/ovirt-web-ui/1.6?dswid=-9733 )
>
>- Czech 24.8% Translated
>- German 98.33% Translated
>- Italian 12.85% Translated
>- Korean 98.51% Translated
>- Portuguese (Brazil) 98.33% Translated
>
> If you're trying to help and  you encounter any issue with the translation
> platform let us know and we'll help you solve them.
>
> Thanks,
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>


-- 
Cheers
Douglas


[ovirt-users] Re: oVirt noVNC

2020-06-22 Thread Anton Louw via Users
Hi Strahil,

Yeah I figured the same. Thank you.


Anton Louw
Cloud Engineer: Storage and Virtualization
__
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.l...@voxtelecom.co.za

www.vox.co.za



From: Strahil Nikolov 
Sent: 22 June 2020 12:34
To: Anton Louw ; Staniforth, Paul 

Cc: users@ovirt.org
Subject: Re: [ovirt-users] Re: oVirt noVNC

It's the client's browser settings , but I think it's easier to either change 
the certificate to something that will be trusted, or to just import it.

Best Regards,
Strahil Nikolov

На 22 юни 2020 г. 11:29:20 GMT+03:00, Anton Louw via Users 
mailto:users@ovirt.org>> написа:
>Hi All,
>
>So I managed to get the noVNC console to work. The last item I am still
>struggling with however is to open the console without importing the CA
>certificate from the below screen:
>
>[cid:image001.png@01D6487F.FC012BF0]
>
>Anybody have any idea what settings I can change for the console to
>display without first importing the CA cert?
>
>Thanks
>
>
>Anton Louw
>Cloud Engineer: Storage and Virtualization
>__
>D: 087 805 1572 | M: N/A
>A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
>anton.l...@voxtelecom.co.za
>
>www.vox.co.za
>
>
>
>From: Anton Louw via Users mailto:users@ovirt.org>>
>Sent: 17 June 2020 06:58
>To: Staniforth, Paul 
>mailto:p.stanifo...@leedsbeckett.ac.uk>>; 
>users@ovirt.org
>Subject: [ovirt-users] Re: oVirt noVNC
>
>
>Hi Paul,
>
>Apologies for the late response.
>
>So we only have a cert bundle on the one that is currently not working.
>The env that is working still has all the default certs.
>
>Thanks
>
>
>Anton Louw
>Cloud Engineer: Storage and Virtualization at Vox
>
>T: 087 805  | D: 087 805 1572
>M: N/A
>E: 
>anton.l...@voxtelecom.co.za>
>A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
>www.vox.co.za>
>
>
>
>From: Staniforth, Paul
>mailto:p.stanifo...@leedsbeckett.ac.uk>>
>Sent: 12 June 2020 13:13
>To: Anton Louw
>mailto:anton.l...@voxtelecom.co.za>>;
>users@ovirt.org>
>Subject: Re: oVirt noVNC
>
>Sorry Anton,
> I'm trying to get a lot of things sorted before the weekend.
>
>This seems the wrong way round, if you follow the documentation you
>shouldn't have the symbolic link on the working system unless you
>replaced the file it was pointing to.
>
>Do you have certificate bundles for both systems?
>
>Regards,
> Paul S.
>
>
>From: Anton Louw
>mailto:anton.l...@voxtelecom.co.za>>
>Sent: 12 June 2020 11:44
>To: Staniforth, Paul
>mailto:p.stanifo...@leedsbeckett.ac.uk>>;
>users@ovirt.org>
>mailto:users@ovirt.org>>
>Subject: RE: oVirt noVNC
>
>
>
>
>Thanks Paul,
>
>
>
>So the symbolic link has then been removed, as per the below. Not quite
>sure where to go from here.
>
>
>
>Anton Louw
>Cloud Engineer: Storage and Virtualization at Vox
>
>T: 087 805  | D: 087 805 1572
>M: N/A
>E: 
>anton.l...@voxtelecom.co.za>
>A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg

[ovirt-users] Fwd: Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-22 Thread C Williams
Strahil,

The GLCL3 storage domain was detached prior to attempting to add the new
storage domain.

Should I also "Remove" it ?

Thank You For Your Help !

-- Forwarded message -
From: Strahil Nikolov 
Date: Mon, Jun 22, 2020 at 12:50 AM
Subject: Re: [ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain
To: C Williams 
Cc: users 


You can't add the new volume as it contains the same data (UUID) as the old
one , thus you need to detach the old one before adding the new one - of
course this means downtime for all VMs on that storage.

As you see, downgrading is simpler. For me v6.5 was working, while
anything above (6.6+) was causing complete lockdown. Also v7.0 was
working, but it's only supported in oVirt 4.4.

Best Regards,
Strahil Nikolov

На 22 юни 2020 г. 7:21:15 GMT+03:00, C Williams 
написа:
>Another question
>
>What version could I downgrade to safely ? I am at 6.9 .
>
>Thank You For Your Help !!
>
>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
>
>wrote:
>
>> You are definitely reading it wrong.
>> 1. I didn't create a new storage domain on top of this new volume.
>> 2. I used cli
>>
>> Something like this  (in your case it should be 'replica 3'):
>> gluster volume create newvol replica 3 arbiter 1
>ovirt1:/new/brick/path
>> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
>> gluster volume start newvol
>>
>> #Detach oldvol from ovirt
>>
>> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
>> mount -t glusterfs ovirt1:/newvol /mnt/newvol
>> cp -a /mnt/oldvol/* /mnt/newvol
>>
>> #Add only newvol as a storage domain in oVirt
>> #Import VMs
>>
>> I still think that you should downgrade your gluster packages!!!
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> На 22 юни 2020 г. 0:43:46 GMT+03:00, C Williams
>
>> написа:
>> >Strahil,
>> >
>> >It sounds like  you used a "System Managed Volume" for the new
>storage
>> >domain,is that correct?
>> >
>> >Thank You For Your Help !
>> >
>> >On Sun, Jun 21, 2020 at 5:40 PM C Williams 
>> >wrote:
>> >
>> >> Strahil,
>> >>
>> >> So you made another oVirt Storage Domain -- then copied the data
>with
>> >cp
>> >> -a from the failed volume to the new volume.
>> >>
>> >> At the root of the volume there will be the old domain folder id
>ex
>> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
>> >>  in my case. Did that cause issues with making the new domain
>since
>> >it is
>> >> the same folder id as the old one ?
>> >>
>> >> Thank You For Your Help !
>> >>
>> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
>> >
>> >> wrote:
>> >>
>> >>> In my situation I had  only the ovirt nodes.
>> >>>
>> >>> На 21 юни 2020 г. 22:43:04 GMT+03:00, C Williams
>> >
>> >>> написа:
>> >>> >Strahil,
>> >>> >
>> >>> >So should I make the target volume on 3 bricks which do not have
>> >ovirt
>> >>> >--
>> >>> >just gluster ? In other words (3) Centos 7 hosts ?
>> >>> >
>> >>> >Thank You For Your Help !
>> >>> >
>> >>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
>> >
>> >>> >wrote:
>> >>> >
>> >>> >> I  created a fresh volume  (which is not an ovirt storage
>> >domain),
>> >>> >set
>> >>> >> the  original  storage  domain  in maintenance and detached
>it.
>> >>> >> Then I 'cp  -a ' the data from the old to the new volume.
>Next,
>> >I
>> >>> >just
>> >>> >> added  the  new  storage domain (the old  one  was  a  kind
>of a
>> >>> >> 'backup')  - pointing to the  new  volume  name.
>> >>> >>
>> >>> >> If  you  observe  issues ,  I would  recommend  you  to
>downgrade
>> >>> >> gluster  packages one node  at  a  time  . Then you might be
>able
>> >to
>> >>> >> restore  your  oVirt operations.
>> >>> >>
>> >>> >> Best  Regards,
>> >>> >> Strahil  Nikolov
>> >>> >>
>> >>> >> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
>> >>> >
>> >>> >> wrote:
>> >>> >> >Strahil,
>> >>> >> >
>> >>> >> >Thanks for the follow up !
>> >>> >> >
>> >>> >> >How did you copy the data to another volume ?
>> >>> >> >
>> >>> >> >I have set up another storage domain GLCLNEW1 with a new
>volume
>> >>> >imgnew1
>> >>> >> >.
>> >>> >> >How would you copy all of the data from the problematic
>domain
>> >GLCL3
>> >>> >> >with
>> >>> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve
>all
>> >the
>> >>> >VMs,
>> >>> >> >VM
>> >>> >> >disks, settings, etc. ?
>> >>> >> >
>> >>> >> >Remember all of the regular ovirt disk copy, disk move, VM
>> >export
>> >>> >> >tools
>> >>> >> >are failing and my VMs and disks are trapped on domain GLCL3
>and
>> >>> >volume
>> >>> >> >images3 right now.
>> >>> >> >
>> >>> >> >Please let me know
>> >>> >> >
>> >>> >> >Thank You For Your Help !
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov
>> >>> >
>> >>> >> >wrote:
>> >>> >> >
>> >>> >> >> Sorry to hear that.
>> >>> >> >> I can say that  for  me 6.5 was  working,  while 6.6 didn't
>> >and I
>> >>> >> >upgraded
>> >>> >> >> to 7.0 .
>> >>> >> >> In the end, I ended up creating a new fresh
>> >volume
>> >>> >and
>> >>> >> >> 

[ovirt-users] Re: oVirt noVNC

2020-06-22 Thread Strahil Nikolov via Users
It's  the client's browser settings ,  but I think it's easier to either change 
the certificate to something that will be trusted,  or  to just import it.

Best Regards,
Strahil  Nikolov
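For the "just import it" option, a hedged sketch for a RHEL/CentOS client follows. The `pki-resource` path is oVirt's standard CA download endpoint; the engine FQDN is a placeholder and `RUN=echo` keeps this a dry run. Note that system-wide trust helps Chromium-based browsers, while Firefox keeps its own certificate store:

```shell
#!/usr/bin/env bash
# Fetch the engine CA and add it to the client's system trust store so the
# console no longer prompts for a manual CA import.
ENGINE_FQDN="${ENGINE_FQDN:-engine.example.com}"   # placeholder
URL="https://${ENGINE_FQDN}/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA"
RUN="${RUN:-echo}"   # dry run by default

$RUN curl -fsS -o ovirt-ca.pem "$URL"
# RHEL/CentOS trust store path; Debian-family systems use update-ca-certificates
$RUN sudo cp ovirt-ca.pem /etc/pki/ca-trust/source/anchors/ovirt-ca.pem
$RUN sudo update-ca-trust extract
```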

On 22 June 2020 at 11:29:20 GMT+03:00, Anton Louw via Users
wrote:
>Hi All,
>
>So I managed to get the noVNC console to work. The last item I am still
>struggling with however is to open the console without importing the CA
>certificate from the below screen:
>
>[cid:image001.png@01D6487F.FC012BF0]
>
>Anybody have any idea what settings I can change for the console to
>display without first importing the CA cert?
>
>Thanks
>
>
>Anton Louw
>Cloud Engineer: Storage and Virtualization
>__
>D: 087 805 1572 | M: N/A
>A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
>anton.l...@voxtelecom.co.za
>
>www.vox.co.za
>
>
>
>From: Anton Louw via Users 
>Sent: 17 June 2020 06:58
>To: Staniforth, Paul ; users@ovirt.org
>Subject: [ovirt-users] Re: oVirt noVNC
>
>
>Hi Paul,
>
>Apologies for the late response.
>
>So we only have a cert bundle on the one that is currently not working.
>The env that is working still has all the default certs.
>
>Thanks
>
>
>Anton Louw
>Cloud Engineer: Storage and Virtualization at Vox
>
>T:  087 805  | D: 087 805 1572
>M: N/A
>E: anton.l...@voxtelecom.co.za
>A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
>www.vox.co.za
>
>
>
>From: Staniforth, Paul
>mailto:p.stanifo...@leedsbeckett.ac.uk>>
>Sent: 12 June 2020 13:13
>To: Anton Louw
>mailto:anton.l...@voxtelecom.co.za>>;
>users@ovirt.org
>Subject: Re: oVirt noVNC
>
>Sorry Anton,
>   I'm trying to get a lot of things sorted before the weekend.
>
>This seems the wrong way round, if you follow the documentation you
>shouldn't have the symbolic link on the working system unless you
>replaced the file it was pointing to.
>
>Do you have certificate bundles for both systems?
>
>Regards,
>Paul S.
>
>
>From: Anton Louw
>mailto:anton.l...@voxtelecom.co.za>>
>Sent: 12 June 2020 11:44
>To: Staniforth, Paul
>mailto:p.stanifo...@leedsbeckett.ac.uk>>;
>users@ovirt.org
>mailto:users@ovirt.org>>
>Subject: RE: oVirt noVNC
>
>
>Caution External Mail: Do not click any links or open any attachments
>unless you trust the sender and know that the content is safe.
>
>
>Thanks Paul,
>
>
>
>So the symbolic link has then been removed, as per the below. Not quite
>sure where to go from here.
>
>
>
>Anton Louw
>Cloud Engineer: Storage and Virtualization at Vox
>
>T:  087 805  | D: 087 805 1572
>M: N/A
>E: anton.l...@voxtelecom.co.za
>A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
>www.vox.co.za
>
>
>
>
>From: Staniforth, Paul

[ovirt-users] Re: oVirt install questions

2020-06-22 Thread Strahil Nikolov via Users


On 22 June 2020 at 11:06:16 GMT+03:00, David White via Users
wrote:
>Thank you and Strahil for your responses.
>They were both very helpful.
>
>> I think a hosted engine installation VM wants 16GB RAM configured
>though I've built older versions with 8GB RAM.
>> For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
>CentOS7 was OK with 1, CentOS6 maybe 512K.
>> The tendency is always increasing with updated OS versions.
>
>Ok, so to clarify my question a little bit, I'm trying to figure out
>how much RAM I would need to reserve for the host OS (or oVirt Node).
>
>I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps
>that would suffice?
>And then as you noted, I would need to plan to give the engine 16GB.

I run my engine on 4GB of RAM, but I have no more than 20 VMs; the larger the
setup, the more RAM the engine needs.

>> My minimum ovirt systems were mostly 48GB 16core, but most are now
>128GB 24core or more.
>
>But this is the total amount of physical RAM in your systems, correct?
>Not the amount that you've reserved for your host OS? I've spec'd out
>some hardware, and am probably looking at purchasing two PowerEdge
>R820's to start, each with 64GB RAM and 32 cores.
> 
>
>> While ovirt can do what you would like it to do concerning a single
>user interface, but with what you listed,
>> you're probably better off with just plain KVM/qemu and using
>virt-manager for the interface.
>
>
>Can you migrate VMs from 1 host to another with virt-manager, and can
>you take snapshots?
>If those two features aren't supported by virt-manager, then that would
>almost certainly be a deal breaker.

The engine is just a management layer. KVM/qemu has had that option for a long
time, yet it's some manual work to do it.

>Come to think of it, if I decided to use local storage on each of the
>physical hosts, would I be able to migrate VMs? 
>Or do I *have* to use a Gluster or NFS store for that?
>
For migration between hosts you need shared storage. SAN, Gluster, CEPH,
NFS, and iSCSI are among the ones already supported (CEPH is a little bit
experimental).

>‐‐‐ Original Message ‐‐‐
>On Sunday, June 21, 2020 5:58 PM, Edward Berger 
>wrote:
>
>> While ovirt can do what you would like it to do concerning a single
>user interface, but with what you listed,
>> you're probably better off with just plain KVM/qemu and using
>virt-manager for the interface.
>> 
>
>> Those memory/cpu requirements you listed are really tiny and I
>wouldn't recommend even trying ovirt on such challenged systems.
>> I would specify at least 3 hosts for a gluster hyperconverged system,
>and a spare available that can take over if one of the hosts dies.
>> 
>
>> I think a hosted engine installation VM wants 16GB RAM configured
>though I've built older versions with 8GB RAM.
>> For modern VMs CentOS8 x86_64 recommends at least 2GB for a host.
>CentOS7 was OK with 1, CentOS6 maybe 512K.
>> The tendency is always increasing with updated OS versions.
>> 
>
>> My minimum ovirt systems were mostly 48GB 16core, but most are now
>128GB 24core or more.
>> 
>
>> ovirt node ng is a prepackaged installer for an oVirt
>hypervisor/gluster host, with its cockpit interface you can create and
>install the hosted-engine VM for the user and admin web interface.  Its
>very good on enterprise server hardware with lots of RAM,CPU, and
>DISKS. 
>> 
>
>> On Sun, Jun 21, 2020 at 4:34 PM David White via Users
> wrote:
>> 
>
>> > I'm reading through all of the documentation at
>https://ovirt.org/documentation/, and am a bit overwhelmed with all of
>the different options for installing oVirt. 
>> > 
>
>> > My particular use case is that I'm looking for a way to manage VMs
>on multiple physical servers from 1 interface, and be able to deploy
>new VMs (or delete VMs) as necessary. Ideally, it would be great if I
>could move a VM from 1 host to a different host as well, particularly
>in the event that 1 host becomes degraded (bad HDD, bad processor,
>etc...)
>> > 
>
>> > I'm trying to figure out what the difference is between an oVirt
>Node and the oVirt Engine, and how the engine differs from the Manager.
>> > 
>
>> > I get the feeling that `Engine` = `Manager`. Same thing. I further
>think I understand the Engine to be essentially synonymous with a
>vCenter VM for ESXi hosts. Is this correct?
>> > 
>
>> > If so, then what's the difference between the `self-hosted` vs the
>`stand-alone` engines?
>> > 
>
>> > oVirt Engine requirements look to be a minimum of 4GB RAM and
>2CPUs.
>> > oVirt Nodes, on the other hand, require only 2GB RAM.
>> > Is this a requirement just for the physical host, or is that how
>much RAM that each oVirt node process requires? In other words, if I
>have a physical host with 12GB of physical RAM, will I only be able to
>allocate 10GB of that to guest VMs? How much of that should I dedicated
>to the oVirt node processes?
>> > 
>
>> > Can you install the oVirt Engine as a VM onto an existing oVirt
>Node? 

[ovirt-users] Re: KeyCloak Integration

2020-06-22 Thread Artur Socha
Hi Anton,
Thanks for the specs. I have created a BZ issue for tracking:
https://bugzilla.redhat.com/show_bug.cgi?id=1849569
Feel free to add comments/change it when needed.
Artur
On Fri, 2020-06-19 at 10:57 +, Anton Louw wrote:
> 
> 
> 
> Hi Artur,
>  
> Please see below:
>  
> ovirt-engine.noarch 4.3.10.4-1.el7@ovirt-4.3
> ovirt-engine-extension-aaa-misc.noarch  1.0.4-1.el7   @ovirt-4.3
> mod_auth_openidc.x86_64 1.8.8-5.el7   @base
>  
> [root@virt ~]# cat /etc/*elease
> CentOS Linux release 7.7.1908 (Core)
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
>  
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
>  
> CentOS Linux release 7.7.1908 (Core)
> CentOS Linux release 7.7.1908 (Core)
>  
> KeyCloak – 
>  
> Server Version: 10.0.1
>  
> Thanks a lot for your help Artur. Please let me know if you need anything
> else.
>  
> 
> 
> From: Artur Socha 
> 
> 
> Sent: 19 June 2020 12:39
> 
> To: Anton Louw ; users@ovirt.org
> 
> Cc: Stephen Hutchinson 
> 
> Subject: Re: [ovirt-users] KeyCloak Integration
> 
> 
>  
> 
> On Fri, 2020-06-19 at 10:21 +, Anton Louw wrote:
> 
> >  
> > Yes I didn’t get to the OVN part yet, as I first wanted to test the if the
> > token can be obtained.
> >  
> > This is the first time we are testing KeyCloak in any environment, so we
> > have never been able to obtain a token for API access.
> >  
> 
> 
> Please post the exact versions of:
> 
> 
> - ovirt-engine* :   
> 
> 
> yum list --installed | grep ovirt-engine 
> 
> 
> yum list --installed | grep
> ovirt-engine-extension-aaa-misc
> 
> 
> yum list --installed | grep
> mod_auth_openidc
> 
> 
> - keycloak
> 
> 
> - OS
> 
> 
> cat /etc/*elease
> 
> 
>  
> 
> 
> I'll submit a bug ... which, most likely, I will assign to myself anyway :)
> 
> 
>  
> 
> 
> Artur
> 
> 
>  
> 
> 
>   
>   
>   
> Anton Louw
>  
>   
> Cloud Engineer: Storage and Virtualization at Vox
> 
>   
>   
> 
>   
>   
> T:  087 805  | D: 087 805 1572
> M: N/A
> 
> E: anton.l...@voxtelecom.co.za
> A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> 
> www.vox.co.za
>   
> > Thanks
> >  
> > 
> > 
> > From: Artur Socha 
> > 
> > 
> > Sent: 19 June 2020 12:16
> > 
> > To: Anton Louw ;
> > users@ovirt.org
> > 
> > Cc: Stephen Hutchinson 
> > 
> > Subject: Re: [ovirt-users] KeyCloak Integration
> > 
> > 
> >  
> > 
> > On Fri, 2020-06-19 at 10:03 +, Anton Louw wrote:
> > 
> > >  
> > > Hi Artur,
> > >  
> > > Sure, please see below output:
> > >  
> > > [root@virt ~]# curl -vvv -H "Accept:application/json" '
> > > https://virt.example.co.za/ovirt-engine/sso/oauth/token?grant_type=password&username=myuser&password=mypass&scope=ovirt-app-api'
> > > * About to connect() to 
> > > virt.example.co.za port 443 (#0)
> > > *   Trying 
> > > 127.0.0.1...
> > > * Connected to 
> > > virt.example.co.za (127.0.0.1) port 443 (#0)
> > > * Initializing NSS with certpath: sql:/etc/pki/nssdb
> > > *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
> > >   CApath: none
> > > * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
> > > * Server certificate:
> > > *   subject: CN=*.example.co.za,OU=Domain Control Validated
> > > *   start date: Sep 25 07:46:12 2019 GMT
> > > *   expire date: Oct 02 07:39:01 2020 GMT
> > > *   common name: *example.co.za
> > > *   issuer: CN=Starfield Secure Certificate Authority - G2,OU=
> > > http://certs.starfieldtech.com/repository/,O="Starfield Technologies,
> > >  Inc.",L=Scottsdale,ST=Arizona,C=US
> > > > GET /ovirt-
> > > engine/sso/oauth/token?grant_type=password&username=myuser&password=mypass
> > > &scope=ovirt-app-api HTTP/1.1
> > > > User-Agent: curl/7.29.0
> > > > Host: 
> > > virt.example.co.za
> > > > Accept:application/json
> > > > 
> > > < HTTP/1.1 400 Bad Request
> > > < Date: Fri, 19 Jun 2020 09:52:11 GMT
> > > < Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips
> > > < Set-Cookie: locale=en_US; path=/; secure; HttpOnly; Max-Age=2147483647;
> > > Expires=Wed, 07-Jul-2088 13:06:18 GMT
> > > < X-XSS-PROTECTION: 1; MODE=BLOCK
> > > < X-CONTENT-TYPE-OPTIONS: NOSNIFF
> > > < X-FRAME-OPTIONS: SAMEORIGIN
> > > < Content-Type: application/json
> > > < Content-Length: 233
> > > < Connection: close
> > > < 
> > > * Closing connection 0
> > > {"error_code":"access_denied","error":"Cannot authenticate user Invalid
> > > scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-
> > > info:public-authz-search ovirt-ext=token-info:validate ovirt-
> > > 

[ovirt-users] Re: Weird problem starting VMs in oVirt-4.4

2020-06-22 Thread jvdwege
June 17, 2020 8:11 AM, "Krutika Dhananjay"  wrote:

> Yes, so the bug has been fixed upstream and the backports to release-7 and 
> release-8 of gluster are
> pending merge. The fix should be available in the next .x release of 
> gluster-7 and 8. Until then
> like Nir suggested, please turn off performance.stat-prefetch on your volumes.
> 
> -Krutika
> On Wed, Jun 17, 2020 at 5:59 AM Nir Soffer  wrote:
> 
>> On Mon, Jun 8, 2020 at 3:10 PM Joop  wrote:
>>> 
>>> On 3-6-2020 14:58, Joop wrote:
 Hi All,
 
 Just had a rather new experience in that starting a VM worked but the
 kernel entered grub2 rescue console due to the fact that something was
 wrong with its virtio-scsi disk.
 The message is Booting from Hard Disk 
 error: ../../grub-core/kern/dl.c:266:invalid arch-independent ELF magic.
 entering rescue mode...
 
 Doing a CTRL-ALT-Del through the spice console let the VM boot
 correctly. Shutting it down and repeating the procedure I get a disk
 problem everytime. Weird thing is if I activate the BootMenu and then
 straight away start the VM all is OK.
 I don't see any ERROR messages in either vdsm.log, engine.log
 
 If I would have to guess it looks like the disk image isn't connected
 yet when the VM boots but thats weird isn't it?
 
 
>>> As an update to this:
>>> Just had the same problem with a Windows VM but more importantly also
>>> with HostedEngine itself.
>>> On the host did:
>>> hosted-engine --set-maintenance --mode=global
>>> hosted-engine --vm-shutdown
>>> 
>>> Stopped all oVirt related services, cleared all oVirt related logs from
>>> /var/log/..., restarted the host, ran hosted-engine --set-maintenance
>>> --mode=none
>>> Watched /var/spool/mail/root to see the engine coming up. It went to
>>> starting but never came into the Up status.
>>> Set a password and used vncviewer to see the console, see attached
>>> screenschot.
>> 
>> The screenshot "engine.png" show gluster bug we discovered a few weeks ago:
>> https://bugzilla.redhat.com/1823423
>> 
>> Until you get a fixed version, this may fix the issues:
>> 
>> # gluster volume set engine performance.stat-prefetch off
>> 
>> See https://bugzilla.redhat.com/show_bug.cgi?id=1823423#c55.
>> 
>> Krutika, can this bug affect upstream gluster?
>> 
>> Joop, please share the gluster version in your setup.

gluster-ansible-cluster-1.0.0-1.el8.noarch
gluster-ansible-features-1.0.5-6.el8.noarch
gluster-ansible-infra-1.0.4-10.el8.noarch
gluster-ansible-maintenance-1.0.1-3.el8.noarch
gluster-ansible-repositories-1.0.1-2.el8.noarch
gluster-ansible-roles-1.0.5-12.el8.noarch
glusterfs-7.5-1.el8.x86_64
glusterfs-api-7.5-1.el8.x86_64
glusterfs-cli-7.5-1.el8.x86_64
glusterfs-client-xlators-7.5-1.el8.x86_64
glusterfs-events-7.5-1.el8.x86_64
glusterfs-fuse-7.5-1.el8.x86_64
glusterfs-geo-replication-7.5-1.el8.x86_64
glusterfs-libs-7.5-1.el8.x86_64
glusterfs-rdma-7.5-1.el8.x86_64
glusterfs-server-7.5-1.el8.x86_64
libvirt-daemon-driver-storage-gluster-6.0.0-17.el8.x86_64
python3-gluster-7.5-1.el8.x86_64
qemu-kvm-block-gluster-4.2.0-19.el8.x86_64
vdsm-gluster-4.40.16-1.el8.x86_64

I tried on Friday to import my VMs but had not much success with the stat-prefetch 
off setting. Some VMs imported correctly, some didn't, and there was no correlation 
with size or anything else.
On Sunday I decided to turn features.shard off and was able to import 1 TB worth 
of VM images without a hitch.
I'm on HCI so I'm assuming that turning sharding off won't be a performance 
problem?
If its fixed I can still move all VM disk to a other storage domain and back 
after turning sharding back on.

Regards,

Joop
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4GRG4LP4BAYQ5LNPG64DA7ZPJWJKXNAX/


[ovirt-users] Re: 4.4.1-rc5: Looking for correct way to configure machine=q35 instead of machine=pc for arch=x86_64

2020-06-22 Thread Sandro Bonazzola
+Asaf Rachmani, +Evgeny Slutsky, can
you please investigate?

On Mon, 22 Jun 2020 at 08:07, Glenn Marcy
wrote:

> Hello, I am hoping for some insight from folks with more hosted engine
> install experience.
>
> When I try to install the hosted engine using the RC5 dist I get the
> following error during the startup
> of the HostedEngine VM:
>
>   XML error: The PCI controller with index='0' must be model='pci-root'
> for this machine type, but model='pcie-root' was found instead
>
> This is due to the HE Domain XML description using
> machine="pc-i440fx-rhel7.6.0".
>
> I've tried to override the default of 'pc' from
> ovirt-ansible-hosted-engine-setup/defaults/main.yml:
>
>   he_emulated_machine: pc
>
> by passing to the ovirt-hosted-engine-setup script a --config-append=file
> parameter where file contains:
>
>   [environment:default]
>   OVEHOSTED_VM/emulatedMachine=str:q35
>
> When the "Create ovirt-hosted-engine-ha run directory" step finishes the
> vm.conf file contains:
>
> cpuType=IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear
> emulatedMachine=q35
>
> At the "Start ovirt-ha-broker service on the host" step that file is
> removed.  When that file appears
> again during the "Check engine VM health" step it now contains:
>
> cpuType=IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear
> emulatedMachine=pc-i440fx-rhel7.6.0
>
> After that the install fails with the metadata from "virsh dumpxml
> HostedEngine" containing:
>
> 1
> XML error: The PCI controller with index='0'
> must be model='pci-root' for this machine type, but model='pcie-root' was
> found instead
>
> Interestingly enough, the HostedEngineLocal VM that is running the
> appliance image has the value I need:
>
>   hvm
>
> Does anyone on the list have any experience with where this needs to be
> overridden?  Somewhere in the
> hosted engine setup or do I need to do something at a deeper level like
> vdsm or libvirt?
>
> Help much appreciated !
>
> Thanks,
> Glenn
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2S5NKX4L7VUYGMEAPKT553IBFAYZZESD/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IM3EQSBHBTORQZM5MAHPOWKYUXIKZCHQ/


[ovirt-users] Re: oVirt install questions

2020-06-22 Thread David White via Users
Thank you and Strahil for your responses.
They were both very helpful.

> I think a hosted engine installation VM wants 16GB RAM configured though I've 
> built older versions with 8GB RAM.
> For modern VMs CentOS8 x86_64 recommends at least 2GB for a host. CentOS7 was 
> OK with 1, CentOS6 maybe 512K.
> The tendency is always increasing with updated OS versions.

Ok, so to clarify my question a little bit, I'm trying to figure out how much 
RAM I would need to reserve for the host OS (or oVirt Node).

I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps that would 
suffice?
And then as you noted, I would need to plan to give the engine 16GB.

> My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 
> 24core or more.

But this is the total amount of physical RAM in your systems, correct? Not the 
amount that you've reserved for your host OS? I've spec'd out some hardware, and 
am probably looking at purchasing two PowerEdge R820's to start, each with 64GB 
RAM and 32 cores.
 

> While ovirt can do what you would like it to do concerning a single user 
> interface, but with what you listed,
> you're probably better off with just plain KVM/qemu and using virt-manager 
> for the interface.



Can you migrate VMs from 1 host to another with virt-manager, and can you take 
snapshots?
If those two features aren't supported by virt-manager, then that would almost 
certainly be a deal breaker.

Come to think of it, if I decided to use local storage on each of the physical 
hosts, would I be able to migrate VMs? 
Or do I *have* to use a Gluster or NFS store for that?

‐‐‐ Original Message ‐‐‐
On Sunday, June 21, 2020 5:58 PM, Edward Berger  wrote:

> While ovirt can do what you would like it to do concerning a single user 
> interface, but with what you listed,
> you're probably better off with just plain KVM/qemu and using virt-manager 
> for the interface.
> 

> Those memory/cpu requirements you listed are really tiny and I wouldn't 
> recommend even trying ovirt on such challenged systems.
> I would specify at least 3 hosts for a gluster hyperconverged system, and a 
> spare available that can take over if one of the hosts dies.
> 

> I think a hosted engine installation VM wants 16GB RAM configured though I've 
> built older versions with 8GB RAM.
> For modern VMs CentOS8 x86_64 recommends at least 2GB for a host. CentOS7 was 
> OK with 1, CentOS6 maybe 512K.
> The tendency is always increasing with updated OS versions.
> 

> My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 
> 24core or more.
> 

> ovirt node ng is a prepackaged installer for an oVirt hypervisor/gluster 
> host, with its cockpit interface you can create and install the hosted-engine 
> VM for the user and admin web interface.  Its very good on enterprise server 
> hardware with lots of RAM,CPU, and DISKS. 
> 

> On Sun, Jun 21, 2020 at 4:34 PM David White via Users  wrote:
> 

> > I'm reading through all of the documentation at 
> > https://ovirt.org/documentation/, and am a bit overwhelmed with all of the 
> > different options for installing oVirt. 
> > 

> > My particular use case is that I'm looking for a way to manage VMs on 
> > multiple physical servers from 1 interface, and be able to deploy new VMs 
> > (or delete VMs) as necessary. Ideally, it would be great if I could move a 
> > VM from 1 host to a different host as well, particularly in the event that 
> > 1 host becomes degraded (bad HDD, bad processor, etc...)
> > 

> > I'm trying to figure out what the difference is between an oVirt Node and 
> > the oVirt Engine, and how the engine differs from the Manager.
> > 

> > I get the feeling that `Engine` = `Manager`. Same thing. I further think I 
> > understand the Engine to be essentially synonymous with a vCenter VM for 
> > ESXi hosts. Is this correct?
> > 

> > If so, then what's the difference between the `self-hosted` vs the 
> > `stand-alone` engines?
> > 

> > oVirt Engine requirements look to be a minimum of 4GB RAM and 2CPUs.
> > oVirt Nodes, on the other hand, require only 2GB RAM.
> > Is this a requirement just for the physical host, or is that how much RAM 
> > that each oVirt node process requires? In other words, if I have a physical 
> > host with 12GB of physical RAM, will I only be able to allocate 10GB of 
> > that to guest VMs? How much of that should I dedicated to the oVirt node 
> > processes?
> > 

> > Can you install the oVirt Engine as a VM onto an existing oVirt Node? And 
> > then connect that same node to the Engine, once the Engine is installed?
> > 

> > Reading through the documentation, it also sounds like oVirt Engine and 
> > oVirt Node require different versions of RHEL or CentOS.
> > I read that the Engine for oVirt 4.4.0 requires RHEL (or CentOS) 8.2, 
> > whereas each Node requires 7.x (although I'll plan to just use the oVirt 
> > Node ISO).
> > 

> > I'm also wondering about storage.
> > I don't really like the idea of using 

[ovirt-users] ovirt/rhev grafana

2020-06-22 Thread Markus Schaufler
Hi!

I've got an existing Grafana installation and would like to integrate our 
ovirt/rhev infrastructure.
How could I do that?

Thanks for any advice,
Markus
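One common approach is to point Grafana at the oVirt DWH history database (`ovirt_engine_history` in PostgreSQL on the engine host), which holds the collected metrics. A hedged provisioning sketch follows; the host, database user and password are assumptions, and you would first need to create a read-only PostgreSQL user on the engine and allow remote connections to it:

```yaml
# /etc/grafana/provisioning/datasources/ovirt-dwh.yaml (sketch; host,
# user and password are placeholders for your environment)
apiVersion: 1
datasources:
  - name: oVirt DWH
    type: postgres
    url: engine.example.com:5432
    database: ovirt_engine_history
    user: grafana_ro
    secureJsonData:
      password: changeme
    jsonData:
      sslmode: require
```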
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H72MRXD3JB3OSSTREWVIQURFN4G4MAAT/


[ovirt-users] Re: HostedEngine install failure (oVirt 4.4 & oVirt Node OS)

2020-06-22 Thread Yedidyah Bar David
On Mon, Jun 22, 2020 at 9:21 AM Ian Easter  wrote:
>
> Hello folks,
>
> Hoping I can trace this down here but kind of "out of the box" error going on 
> here.
>
> Steps:
> - Install oVirt Node OS
> - Manual steps using ovirt-hosted-engine-setup
>
> Might be a step I glanced over so I'm alright with a finger point and RTFM 
> statement.  ;-)
>
> Process fails out:
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using 
> username/password credentials]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]

This  is the task that failed. The deploy process asks the engine
to add the host, then polls the engine waiting until the host appears
as Up.

For you, it timed out.

Please check/share all of the directory
/var/log/ovirt-hosted-engine-setup, to try to find why.

If engine-logs-* inside it is empty, you might try to get the engine
logs from the engine VM itself - you can find its IP address by
searching the logs for "local_vm_ip" and ssh to it from the host.
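The log search described above can be done with a one-liner; a hedged sketch, with the log directory overridable (the default shown is the standard setup log location):

```shell
#!/usr/bin/env bash
# Pull the temporary engine VM IP out of the hosted-engine setup logs.
LOGDIR="${LOGDIR:-/var/log/ovirt-hosted-engine-setup}"
# Print the last recorded local_vm_ip value (nothing if the logs are absent)
grep -rho 'local_vm_ip[^,}]*' "$LOGDIR" 2>/dev/null | tail -n 1
```

Once you have the IP, `ssh root@<that-ip>` from the host reaches the engine VM.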

Best regards,

> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": false, 
> "ovirt_hosts": [{"address": "mtl-hv-14.teve.inc", "affinity
> _labels": [], "auto_numa_status": "unknown", "certificate": {"organization": 
> "teve.inc", "subject": "O=teve.inc,CN=mtl-hv-14.teve.inc"},
>  "cluster": {"href": 
> "/ovirt-engine/api/clusters/ba6daa62-b1a5-11ea-a207-00163e79d98c", "id": 
> "ba6daa62-b1a5-11ea-a207-00163e79d98c"}, "
> comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": 
> {"enabled": false}, "devices": [], "external_network_provider
> _configurations": [], "external_status": "ok", "hardware_information": 
> {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engin
> e/api/hosts/e1399963-f520-4bdc-8ef0-832dc3d99ece", "id": 
> "e1399963-f520-4bdc-8ef0-832dc3d99ece", "katello_errata": [], "kdump_status": 
> "
> unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, 
> "name": "mtl-hv-14.teve.inc", "network_attachments": [], "
> nics": [], "numa_nodes": [], "numa_supported": false, "os": 
> {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, 
> "power_management": {"automatic_pm_enabled": true, "enabled": false, 
> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": 
> $}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": 
> "SHA256:rfVGiGz8dQU7Hr5irbd8N+xBkj94qWThArTokcSqGV8", "port": 22}, 
> $statistics": [], "status": "install_failed", 
> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
> "transparent_hug$_pages": {"enabled": false}, "type": "rhel", 
> "unmanaged_networks": [], "update_available": false, "vgpu_placement": 
> "consolidated"}]}
> ...
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The 
> system may not be provisioned according to the playbook results$ please check 
> the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing 
> ansible-playbook
>
> I've attached the ovirt-hosted-engine-setup log.
>
> Thank you,
> Ian Easter
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QXT3G3THYA6MYQARVNNWAN4IWU4YOEAH/



-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RF74BWRMW2VC4DPRKEM2CK3AQRSU2HJN/


[ovirt-users] Re: Hosted engine deployment doesn't add the host(s) to the /etc/hosts engine, even if hostname doesn't get resolved by DNS server

2020-06-22 Thread Yedidyah Bar David
On Sun, Jun 21, 2020 at 8:04 PM Gilboa Davara  wrote:
>
> Hello,
>
> Following the previous email, I think I'm hitting an odd problem, not
> sure if it's my mistake or an actual bug.
> 1. Newly deployed 4.4 self-hosted engine on localhost NFS storage on a
> single node.
> 2. Installation failed during the final phase with a non-descriptive
> error message [1].

I agree. Would you like to open a bug about this? It's not always easy
to know the root cause for the failure, nor to pass it through the
various components until it can reach the end-user.

> 3. Log attached.
> 4. Even though the installation seemed to have failed, I managed to
> connect to the ovirt console, and noticed it failed to connect to the
> host.
> 5. SSH into the hosted engine, and noticed it cannot resolve the host 
> hostname.
> 6. Added the missing /etc/hosts entry, restarted the ovirt-engine
> service, and all is green.
> 7. Looking at the deployment log, I'm seeing the following message:
> "[WARNING] Failed to resolve gilboa-wx-ovirt.localdomain using DNS, it
> can be resolved only locally", which means Ansible was aware that my
> DNS server doesn't resolve the host hostname, but it neither added the
> missing /etc/hosts entry nor errored out.
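The manual fix in steps 5-6 above can be sketched as follows (the IP
address is a placeholder, and HOSTS_FILE lets you dry-run against a copy
instead of the real /etc/hosts):

```shell
# Hedged sketch of the manual workaround: add the host's entry on the
# engine VM if it is missing. The IP address is a placeholder.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
HOST_ENTRY="192.168.1.10 gilboa-wx-ovirt.localdomain"
# Append the entry only if the file is writable and the entry is absent.
if [ -w "$HOSTS_FILE" ] && ! grep -qF "$HOST_ENTRY" "$HOSTS_FILE"; then
    echo "$HOST_ENTRY" >> "$HOSTS_FILE"
fi
# Then, on the engine VM, restart the engine so it re-resolves the host:
# systemctl restart ovirt-engine
```

Being idempotent, the snippet can be re-run safely; it never duplicates
the entry.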

Not sure it must abort. In principle, you could have supplied custom
ansible code to be run inside the appliance to add the entries to
/etc/hosts yourself, and in theory it can also happen that you configured
things so that the host fails DNS resolution but the engine VM does not.

>
> A. Is it a bug, or is it PEBKAC?

It also asked you:

2020-06-21 10:49:18,562-0400 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:204 DIALOG:SEND Add lines for the
appliance itself and for this host to /etc/hosts on the engine VM?
2020-06-21 10:49:18,562-0400 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:204 DIALOG:SEND Note: ensuring that
this host could resolve the engine VM hostname is still up to you
2020-06-21 10:49:18,563-0400 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:204 DIALOG:SEND (Yes, No)[No]

And you accepted the default 'No'.

Perhaps we should change the default to Yes.

Of course, Yes is also a risk: a user might not notice it, later on
change the DNS, and then not understand why it "does not work"...

> B. What are the chances that I have a working ovirt (test) setup?

In theory, you can examine the ansible code, see which (not very many)
steps it would have performed next had it not failed there, and do them
yourself (or decide that they are not important). In practice, I'd
personally deploy again cleanly, unless this is just for a quick test
or something.

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSD5OI4KW3PRCCVB4BHL5ZM3FN5UP5IE/


[ovirt-users] 4.4.1-rc5: Looking for correct way to configure machine=q35 instead of machine=pc for arch=x86_64

2020-06-22 Thread Glenn Marcy
Hello, I am hoping for some insight from folks with more hosted engine install 
experience.

When I try to install the hosted engine using the RC5 dist I get the following 
error during the startup
of the HostedEngine VM:

  XML error: The PCI controller with index='0' must be model='pci-root' for 
this machine type, but model='pcie-root' was found instead

This is due to the HE Domain XML description using 
machine="pc-i440fx-rhel7.6.0".

I've tried to override the default of 'pc' from 
ovirt-ansible-hosted-engine-setup/defaults/main.yml:

  he_emulated_machine: pc

by passing to the ovirt-hosted-engine-setup script a --config-append=file 
parameter where file contains:

  [environment:default]
  OVEHOSTED_VM/emulatedMachine=str:q35
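End to end, the override attempt described above can be sketched as (the
answers-file name is arbitrary, and hosted-engine only exists on an
oVirt host, hence the guard):

```shell
# Sketch of the q35 override attempt: write an answers file with the
# emulatedMachine setting, then pass it via --config-append. The file
# name "he-q35.conf" is arbitrary.
cat > he-q35.conf <<'EOF'
[environment:default]
OVEHOSTED_VM/emulatedMachine=str:q35
EOF
# Guarded: only meaningful on a host with ovirt-hosted-engine-setup.
if command -v hosted-engine >/dev/null 2>&1; then
    hosted-engine --deploy --config-append=he-q35.conf
fi
```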

When the "Create ovirt-hosted-engine-ha run directory" step finishes, the
vm.conf file contains:

cpuType=IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear
emulatedMachine=q35

At the "Start ovirt-ha-broker service on the host" step that file is
removed. When the file appears again during the "Check engine VM health"
step, it now contains:

cpuType=IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear
emulatedMachine=pc-i440fx-rhel7.6.0

After that the install fails with the metadata from "virsh dumpxml 
HostedEngine" containing:

1
XML error: The PCI controller with index='0' must be 
model='pci-root' for this machine type, but model='pcie-root' was found 
instead

Interestingly enough, the HostedEngineLocal VM that is running the appliance 
image has the value I need:

  hvm
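A quick way to compare which machine type each domain actually got is to
pull the machine= attribute straight out of the domain XML (sketch; VM
names are the ones from this thread, and the whole thing is a no-op
without libvirt):

```shell
# Extract the machine= attribute from a domain's XML, e.g. to compare
# HostedEngine against HostedEngineLocal. No-op if virsh is absent.
machine_type() { grep -o 'machine="[^"]*"' | head -n 1; }
if command -v virsh >/dev/null 2>&1; then
    for vm in HostedEngine HostedEngineLocal; do
        printf '%s: ' "$vm"
        virsh dumpxml "$vm" | machine_type
    done
fi
```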

Does anyone on the list have any experience with where this needs to be
overridden? Somewhere in the hosted-engine setup, or do I need to do
something at a deeper level like vdsm or libvirt?

Help much appreciated !

Thanks,
Glenn
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2S5NKX4L7VUYGMEAPKT553IBFAYZZESD/


[ovirt-users] Re: Hosted engine deployment fails consistently when trying to download files.

2020-06-22 Thread Yedidyah Bar David
On Sun, Jun 21, 2020 at 7:36 PM Gilboa Davara  wrote:
>
> On Thu, Jun 18, 2020 at 2:54 PM Yedidyah Bar David  wrote:
> >
> > On Thu, Jun 18, 2020 at 2:37 PM Gilboa Davara  wrote:
> > >
> > > On Wed, Jun 17, 2020 at 12:35 PM Yedidyah Bar David  
> > > wrote:
> > > > > However, when trying to install 4.4 on the test CentOS 8.x (now 8.2
> > > > > after yesterday release), either manually (via hosted-engine --deploy)
> > > > > or by using cockpit, fails when trying to download packages (see
> > > > > attached logs) during the hosted engine deployment phase.
> > > >
> > > > Right. Didn't check them - I guess it's the same, no?
> > >
> > > Most likely you are correct. That said, the console version is more 
> > > verbose.
> > >
> > >
> > > > > Just to be clear, it is the hosted engine VM (during the deployment
> > > > > process) that fails to automatically download packages, _not_ the
> > > > > host.
> > > >
> > > > Exactly. That's why I asked you (because the logs do not reveal that)
> > > > to manually login there and try to install (update) the package, and
> > > > see what happens, why it fails, etc. Can you please try that? Thanks.
> > >
> > > Sadly enough, the failure comes early in the hosted engine deployment
> > > process, making the VM completely inaccessible.
> > > While I see qemu-kvm briefly start, it usually dies before I have any
> > > chance to access it.
> > >
> > > Can I somehow prevent hosted-engine --deploy from destroying the
> > > hosted engine VM, when the deployment fails, giving me access to it?
> >
> > This is how it should behave normally, it does not kill the VM.
> > Perhaps check logs, try to find who/what killed it.
> >
> > Anyway: Earlier today I pushed this patch:
> >
> > https://gerrit.ovirt.org/109730
> >
> > Didn't yet get to try verifying it. Would you like to try? You can get
> > an RPM from the CI build linked there, or download the patch and apply
> > it manually (in the "gitweb" link [1]).
> >
> > Then, you can do:
> >
> > hosted-engine --deploy --ansible-extra-vars=he_offline_deployment=true
> >
> > If you try this, please share the results. Thanks!
> >
> > [1] 
> > https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-setup.git;a=commitdiff_plain;h=f77fa8b84ed6d8a74cbe56b95accb1e8131afbb5

Now filed https://bugzilla.redhat.com/1849517 for this.

> >
> > Best regards,
> > --
> > Didi
> >
>
> Good news. I managed to connect to the VM and solve the problem.

Glad to hear that, thanks for the report!

>
> For some odd reason our primary DNS server had upstream connection
> issues and all the requests were silently handled by our secondary DNS
> server.
> Not sure I understand why, but while the ovirt host did manage to
> silently spill over to the secondary DNS, the hosted engine, at least
> during the initial deployment phase (when it still uses the host's
> dnsmasq), failed to spill over to the secondary DNS server and the
> deployment failed.
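The failure mode described above (a primary nameserver that silently
times out) can be probed by querying each configured server directly,
for example (the hostname is a placeholder, and RESOLV_CONF can point at
a copy of the file for testing):

```shell
# Query each nameserver from resolv.conf on its own, so a silently
# failing primary shows up instead of being masked by the secondary.
RESOLV_CONF="${RESOLV_CONF:-/etc/resolv.conf}"
nameservers() { awk '/^nameserver/ {print $2}' "$RESOLV_CONF"; }
# Guarded: dig may not be installed (bind-utils on EL hosts).
if command -v dig >/dev/null 2>&1; then
    for ns in $(nameservers); do
        if ! dig +short +time=2 +tries=1 @"$ns" ovirt-engine.localdomain >/dev/null; then
            echo "WARNING: no reply from $ns"
        fi
    done
fi
```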

Sounds like a bug in dnsmasq, although I am not sure.

That said, DNS/DHCP are out of scope for oVirt. We simply assume they
are robust.

In retrospect, what do you think we should have done differently
to make it easier for you to find (and fix) the problem?

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N3QAVRLDCWM5N2STG4VP5TJMJIKA57ZY/