[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-09-12 Thread Yedidyah Bar David
On Mon, Sep 13, 2021 at 1:08 AM Gianluca Cecchi
 wrote:
>
> On Sun, Sep 12, 2021 at 10:35 AM Yedidyah Bar David  wrote:
>>
>>
>> >>
>> >> It was the step where I suspect there is a regression in 4.4.8 (compared 
>> >> with 4.4.7) when updating the first hosted-engine host during the upgrade 
>> >> flow and retaining its hostname details.
>>
>> What's the regression?
>
>
> I thought that in 4.4.7 this problem did not occur if you used the same 
> hostname but different (real or virtual) hw as the first host during 
> your SHE upgrade from 4.3.10 to 4.4.7.
> But probably that was not the case and I didn't remember correctly.
>
>>
>> >> I'm going to test with the latest 4.4.8 async 2 and see if it solves the 
>> >> problem. Otherwise I'm going to open a bugzilla, attaching the logs.
>>
>> Can you clarify what the bug is?
>
>
> The automatic management of host addition during the "hosted-engine --deploy 
> --restore-from-file=backup.bck" step when you have different hw and want to 
> recycle your previous hostname.
> In the past I often combined upgrades of systems with hw refreshes (with 
> standalone hosts, RHCS clusters, and also oVirt/RHV from 4.2 to 4.3 if I 
> remember correctly, etc.), where you re-use an existing hostname on new 
> hardware.
> More than a bug, it would perhaps be an RFE.

OK, now filed it: https://bugzilla.redhat.com/show_bug.cgi?id=2003515

>
>
>>
>> > As novirt2 and novirt1 (in 4.3) are VMs running on the same hypervisor I 
>> > see that in their hw details I have the same serial number and the usual 
>> > random uuid
>>
>> Same serial number? Doesn't sound right. Any idea why it's the same?
>
>
> My env is nested oVirt and my hypervisors are VMs.
> I noticed that in oVirt if you clone a VM it changes the uuid in the clone but 
> it retains the serial number...

OK, understood. Unrelated to the current issue, but it might be worth
optionally changing this as well during a clone.

>
>> > Unfortunately I cannot try at the moment the scenario where I deploy the 
>> > new novirt2 on the same virtual hw, because in the first 4.3 install I 
>> > configured the OS disk as 50 GB and with this size 4.4.8 complains about 
>> > insufficient space. And having the snapshot active in preview I cannot 
>> > resize the disk.
>> > If needed, I can reinstall 4.3 on an 80 GB disk and try the same, 
>> > keeping the same hw ... but this would imply that in general I cannot 
>> > upgrade using different hw while reusing the same hostnames, correct?
>>
>> Yes. Either reuse a host and keep its name (what we recommend in the
>> upgrade guide) or use a new host and a new name (backup/restore
>> guide).
>>
>> The condition to remove the host prior to adding it is based on
>> unique_id_out, which is set in (see also bz 1642440, 1654697):
>>
>>   - name: Get host unique id
>>     shell: |
>>       if [ -e /etc/vdsm/vdsm.id ];
>>       then cat /etc/vdsm/vdsm.id;
>>       elif [ -e /proc/device-tree/system-id ];
>>       then cat /proc/device-tree/system-id; #ppc64le
>>       else dmidecode -s system-uuid;
>>       fi;
>>     environment: "{{ he_cmd_lang }}"
>>     changed_when: true
>>     register: unique_id_out
>>
>> So if you want to "make this work", you can set the uuid (either in
>> your (virtual) BIOS, to affect the /proc value, or in
>> /etc/vdsm/vdsm.id) to match that of the old host (the one whose name you
>> want to reuse). I didn't test this myself, though.
>>
>
> I confirm: I reverted the snapshots of the 2 VMs used as hypervisors, 
> taking them back to the initial 4.3 status, and redid all the steps. Right 
> after installing the OS of the 4.4.8 oVirt node, I created /etc/vdsm/vdsm.id 
> inside novirt2 with the old 4.3 value (the file was not there at that moment), 
> and then the whole flow went as expected. I was able to reach the final 
> 4.4.8 async 2 env with both hosts at 4.4.8, cluster and DC updated to 4.6 
> compatibility level, and no downtime for the VMs inside the env, because I 
> was able to live migrate them after upgrading the first host.

Thanks for the report!

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3PZLOASBPEUSGULEW2TPYG6ET4U7ICYT/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-09-12 Thread Gianluca Cecchi
On Sun, Sep 12, 2021 at 10:35 AM Yedidyah Bar David  wrote:

>
> >>
> >> It was the step where I suspect there is a regression in 4.4.8
> (compared with 4.4.7) when updating the first hosted-engine host during
> the upgrade flow and retaining its hostname details.
>
> What's the regression?
>

I thought that in 4.4.7 this problem did not occur if you used the same
hostname but different (real or virtual) hw as the first host during
your SHE upgrade from 4.3.10 to 4.4.7.
But probably that was not the case and I didn't remember correctly.


> >> I'm going to test with the latest 4.4.8 async 2 and see if it solves the
> problem. Otherwise I'm going to open a bugzilla, attaching the logs.
>
> Can you clarify what the bug is?
>

The automatic management of host addition during the "hosted-engine --deploy
--restore-from-file=backup.bck" step when you have different hw and want
to recycle your previous hostname.
In the past I often combined upgrades of systems with hw refreshes (with
standalone hosts, RHCS clusters, and also oVirt/RHV from 4.2 to 4.3 if I
remember correctly, etc.), where you re-use an existing hostname on new
hardware.
More than a bug, it would perhaps be an RFE.



> > As novirt2 and novirt1 (in 4.3) are VMs running on the same hypervisor I
> see that in their hw details I have the same serial number and the usual
> random uuid
>
> Same serial number? Doesn't sound right. Any idea why it's the same?
>

My env is nested oVirt and my hypervisors are VMs.
I noticed that in oVirt if you clone a VM it changes the uuid in the clone
but it retains the serial number...

> Unfortunately I cannot try at the moment the scenario where I deploy the
> new novirt2 on the same virtual hw, because in the first 4.3 install I
> configured the OS disk as 50 GB and with this size 4.4.8 complains about
> insufficient space. And having the snapshot active in preview I cannot
> resize the disk.
> If needed, I can reinstall 4.3 on an 80 GB disk and try the same,
> keeping the same hw ... but this would imply that in general I cannot
> upgrade using different hw while reusing the same hostnames, correct?
>
> Yes. Either reuse a host and keep its name (what we recommend in the
> upgrade guide) or use a new host and a new name (backup/restore
> guide).
>
> The condition to remove the host prior to adding it is based on
> unique_id_out, which is set in (see also bz 1642440, 1654697):
>
>   - name: Get host unique id
>     shell: |
>       if [ -e /etc/vdsm/vdsm.id ];
>       then cat /etc/vdsm/vdsm.id;
>       elif [ -e /proc/device-tree/system-id ];
>       then cat /proc/device-tree/system-id; #ppc64le
>       else dmidecode -s system-uuid;
>       fi;
>     environment: "{{ he_cmd_lang }}"
>     changed_when: true
>     register: unique_id_out
>
> So if you want to "make this work", you can set the uuid (either in
> your (virtual) BIOS, to affect the /proc value, or in
> /etc/vdsm/vdsm.id) to match that of the old host (the one whose name you
> want to reuse). I didn't test this myself, though.
>
>
I confirm: I reverted the snapshots of the 2 VMs used as hypervisors,
taking them back to the initial 4.3 status, and redid all the steps. Right
after installing the OS of the 4.4.8 oVirt node, I created /etc/vdsm/vdsm.id
inside novirt2 with the old 4.3 value (the file was not there at that
moment), and then the whole flow went as expected. I was able to reach
the final 4.4.8 async 2 env with both hosts at 4.4.8, cluster and DC
updated to 4.6 compatibility level, and no downtime for the VMs inside the
env, because I was able to live migrate them after upgrading the first
host.
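
Concretely, the workaround boils down to something like this (a sketch; the
UUID below is the old 4.3 value of novirt2 in my env, use your own old
host's id):

  # on the old 4.3 host, before scratching it, record its unique id
  cat /etc/vdsm/vdsm.id 2>/dev/null || dmidecode -s system-uuid

  # on the freshly installed 4.4 host, before running
  # "hosted-engine --deploy --restore-from-file=backup.bck",
  # recreate the file with the old value
  echo 'D584E962-5461-4FA5-AFFA-DB413E17590C' > /etc/vdsm/vdsm.id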


> Perhaps, if you do want to open a bug, it should say something like:
> "HE deploy should remove the old host based on its name, and not its
> UUID". However, it's not completely clear to me that this won't
> introduce new regressions.
>
> I admit I didn't completely understand your flow, and especially your
> considerations there. If you think the current behavior prevents an
> important flow, please clarify.
>
> Best regards,
> --
> Didi
>
>
My consideration, as explained at the beginning, was to give users the chance
to reuse the hostname (often the oVirt admin is not responsible for
hostname creation/mgmt) when they want to leverage new hw in combination with
the upgrade process.

Thanks for all the other considerations you put into your answer.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KXSZHEPQSHTFS3VSB25TUZ7DNFFVBHYB/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-09-12 Thread Yedidyah Bar David
Hi Gianluca,

On Fri, Sep 10, 2021 at 10:04 AM Gianluca Cecchi
 wrote:
>
>
> On Wed, Sep 1, 2021 at 4:26 PM Gianluca Cecchi  
> wrote:
>>
>> On Wed, Sep 1, 2021 at 4:00 PM Yedidyah Bar David  wrote:
>>>
>>>
>>> >
>>> > So I think there was something wrong with my system or probably a 
>>> > regression on this in 4.4.8.
>>> >
>>> > I see these lines in ansible steps of deploy of RHV 4.3 -> 4.4
>>> >
>>> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Remove host used to 
>>> > redeploy]
>>> > [ INFO  ] changed: [localhost -> 192.168.222.170]
>>> >
>>> > possibly this step should remove the host that I'm reinstalling...?
>>>
>>> It should. From the DB, before adding it again. Matches on the uuid
>>> (search the code for unique_id_out if you want the details). Why?
>>>
>>> (I didn't follow all this thread, ignoring the rest for now...)
>>>
>>> Best regards,
>>>
>>>
>>
>> It was the step where I suspect there is a regression in 4.4.8 (compared 
>> with 4.4.7) when updating the first hosted-engine host during the upgrade 
>> flow and retaining its hostname details.

What's the regression?

>> I'm going to test with the latest 4.4.8 async 2 and see if it solves the 
>> problem. Otherwise I'm going to open a bugzilla, attaching the logs.

Can you clarify what the bug is?

>>
>> Gianluca
>
>
> So I tried with 4.4.8 async 2 but got the same problem
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check actual cluster 
> location]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Enable GlusterFS at cluster 
> level]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Set VLAN ID at datacenter 
> level]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get active list of active 
> firewalld zones]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Configure libvirt firewalld 
> zone]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add host]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host 
> tasks files]
> [ INFO  ] You can now connect to 
> https://novirt2.localdomain.local:6900/ovirt-engine/ and check the status of 
> this host and eventually remediate it, please continue only when the host is 
> listed as 'up'
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
> [ INFO  ] changed: [localhost -> localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until 
> /tmp/ansible.wy3ichvk_he_setup_lock is removed, delete it once ready to 
> proceed]
>
> the host remains NonResponsive in the local engine, and engine.log shows 
> the same
>
> 2021-09-10 08:44:51,481+02 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand] 
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-37) [] 
> Command 'GetCapabilitiesAsyncVDSCommand(HostName = novirt2.localdomain.local, 
> VdsIdAndVdsVDSCommandParametersBase:{hostId='ca9ff6f7-5a7c-4168-9632-998c52f76cfa',
>  
> vds='Host[novirt2.localdomain.local,ca9ff6f7-5a7c-4168-9632-998c52f76cfa]'})' 
> execution failed: java.net.ConnectException: Connection refused
>
> so the initial install/config of novirt2 doesn't start
>
> So the scenario is
>
> initial 4.3.10 with 2 hosts (novirt1 and novirt2) and 1 she engine (novmgr)
> iSCSI based storage: hosted_engine storage domain and one data storage domain
>
> This is a nested env, so through snapshots I can retry and repeat steps.
> novirt1 and novirt2 are two VMs under one oVirt 4.4 env composed of one 
> single host and an external engine
>
> the steps:
> 1 VM running under novirt1 and the hosted engine running under novirt2 at the 
> beginning
> . global maintenance
> . stop engine
> . backup
> . shutdown engine vm and scratch novirt2
> actually I simulate the scenario where I deploy novirt2 on new hw, that is, a 
> clone of the novirt2 VM
> I already tested (in a previous version of 4.4.8) that if I go with a 
> different hostname it works

Correct
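
(For reference, the preparation steps above map roughly to the following
standard sequence -- a sketch, the file names are just examples:

  # on one of the HE hosts
  hosted-engine --set-maintenance --mode=global
  # on the engine VM
  systemctl stop ovirt-engine
  engine-backup --mode=backup --file=backup.bck --log=backup.log
  # copy backup.bck off the engine VM, then shut the VM down

)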

> As novirt2 and novirt1 (in 4.3) are VMs running on the same hypervisor I see 
> that in their hw details I have the same serial number and the usual random 
> uuid

Same serial number? Doesn't sound right. Any idea why it's the same?

>
> novirt1
> uuid B1EF9AFF-D4BD-41A1-B26E-7DD0CC440963
> serial number 00fa984c-d5a1-e811-906e-00163566263e
>
> novirt2
> uuid D584E962-5461-4FA5-AFFA-DB413E17590C
> serial number  00fa984c-d5a1-e811-906e-00163566263e
>
> and the new novirt2, which being a clone has a different uuid, shows (from 
> dmidecode):
> uuid: 10b9031d-a475-4b41-a134-bad2ede3cf11
> serial Number: 00fa984c-d5a1-e811-906e-00163566263e
>
> Unfortunately I cannot try at the moment the scenario where I deploy the new 
> novirt2 on the same virtual hw, because in the first 4.3 install I 

[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-09-01 Thread Gianluca Cecchi
On Wed, Sep 1, 2021 at 4:00 PM Yedidyah Bar David  wrote:

>
> >
> > So I think there was something wrong with my system or probably a
> regression on this in 4.4.8.
> >
> > I see these lines in ansible steps of deploy of RHV 4.3 -> 4.4
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Remove host used to
> redeploy]
> > [ INFO  ] changed: [localhost -> 192.168.222.170]
> >
> > possibly this step should remove the host that I'm reinstalling...?
>
> It should. From the DB, before adding it again. Matches on the uuid
> (search the code for unique_id_out if you want the details). Why?
>
> (I didn't follow all this thread, ignoring the rest for now...)
>
> Best regards,
>
>
>
It was the step where I suspect there is a regression in 4.4.8 (compared
with 4.4.7) when updating the first hosted-engine host during the upgrade
flow and retaining its hostname details.
I'm going to test with the latest 4.4.8 async 2 and see if it solves the
problem. Otherwise I'm going to open a bugzilla, attaching the logs.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P2CJRLA7INKW2RIH7HRRPUKIFSJ3NH7J/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-09-01 Thread Yedidyah Bar David
On Sun, Aug 29, 2021 at 1:13 PM Gianluca Cecchi
 wrote:
>
> On Fri, Aug 27, 2021 at 7:57 PM Gianluca Cecchi  
> wrote:
>>
>>
>>
> Next step will be to try removing one of the two hosts while the env is still 
> all on 4.3.10, then take a backup of the engine, then install the second host 
> as 4.4.8 and see if it goes OK.
>> I'm going to revert the 4.3.10 snapshot consistent env and try...
>>
>
> Actually this step is the same as what I performed (new host, because I 
> pre-remove the existing one...).
> And in fact I remember at the beginning of July I made a similar test on the 
> same test env with ovirt 4.4.7 async 2 node iso and didn't have this kind of 
> problem.
>
> Over the weekend I had to do similar steps with two different SHE environments 
> with RHV, and everything went as expected, without the fingerprint error and 
> using the same hostname for the first host I redeployed in 4.4.
> I passed from the latest RHV 4.3 to the latest 4.4, which currently is iso 
> 4.4.7.4-0.20210804 and is based on 4.4.7 async 2 (Hypervisor Image for RHV 
> 4.4.z batch#6 (oVirt-4.4.7-2) Async #1)
>
> So I think there was something wrong with my system or probably a regression 
> on this in 4.4.8.
>
> I see these lines in ansible steps of deploy of RHV 4.3 -> 4.4
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Remove host used to 
> redeploy]
> [ INFO  ] changed: [localhost -> 192.168.222.170]
>
> possibly this step should remove the host that I'm reinstalling...?

It should. From the DB, before adding it again. Matches on the uuid
(search the code for unique_id_out if you want the details). Why?

(I didn't follow all this thread, ignoring the rest for now...)

Best regards,

>
> I will redo the same again on oVirt and, if needed, open a bugzilla with all 
> the log files generated.
>
> Gianluca
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAW5C3TS5QCBCJ4PA4BSQOFIZHBWXURA/



-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DCOHEL5LK3EADJ2BWJGLN6V7OLZRY4F5/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-08-29 Thread Gianluca Cecchi
On Fri, Aug 27, 2021 at 7:57 PM Gianluca Cecchi 
wrote:

>
>
> Next step will be to try removing one of the two hosts while the env is still
> all on 4.3.10, then take a backup of the engine, then install the second host
> as 4.4.8 and see if it goes OK.
> I'm going to revert the 4.3.10 snapshot consistent env and try...
>
>
Actually this step is the same as what I performed (new host, because I
pre-remove the existing one...).
And in fact I remember at the beginning of July I made a similar test on
the same test env with ovirt 4.4.7 async 2 node iso and didn't have this
kind of problem.

Over the weekend I had to do similar steps with two different SHE
environments with RHV, and everything went as expected, without the
fingerprint error and using the same hostname for the first host I
redeployed in 4.4.
I passed from the latest RHV 4.3 to the latest 4.4, which currently is iso
4.4.7.4-0.20210804 and is based on 4.4.7 async 2 (Hypervisor Image for RHV
4.4.z batch#6 (oVirt-4.4.7-2) Async #1)

So I think there was something wrong with my system or probably a
regression on this in 4.4.8.

I see these lines in ansible steps of deploy of RHV 4.3 -> 4.4

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Remove host used to
redeploy]
[ INFO  ] changed: [localhost -> 192.168.222.170]

possibly this step should remove the host that I'm reinstalling...?

I will redo the same again on oVirt and, if needed, open a bugzilla with all
the log files generated.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAW5C3TS5QCBCJ4PA4BSQOFIZHBWXURA/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-08-27 Thread Gianluca Cecchi
On Fri, Aug 27, 2021 at 4:38 PM Gianluca Cecchi 
wrote:

> On Fri, Aug 27, 2021 at 4:10 PM Gianluca Cecchi 
> wrote:
>
>>
>>
>> no mention of ssh fingerprint reissue
>>
>> Any hint on how to do it?
>>
>> Gianluca
>>
>
> I found this link, related to 4.3 to 4.4 for RHV, that seems to somehow
> confirm the need of a "spare" host
> https://www.frangarcia.me/posts/notes-on-upgrading-rhv-43-to-rhv-44/
>
>
OK, so the next step tried was (thanks Sandro for the input!):
. power down novirt2 where the hosted-engine deploy was stuck (together with
the still-local engine VM), scratching it
. install the same host but with name novirt3.localdomain.local and
different ip
. run the
hosted-engine --deploy --restore-from-file=backup.bck
. now all goes OK: novirt3 has been added to the engine and novirt1 shows
up, while novirt2 is non-responsive (it doesn't exist any more...)
and the whole flow completes

[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local
storage-pool 84f8abd5-31ec-4c62-8130-521bb55c41e6]
[ INFO  ] changed: [localhost]
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20210827182404.conf'
[ INFO  ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Hosted Engine successfully deployed
[ INFO  ] Other hosted-engine hosts have to be reinstalled in order to
update their storage configuration. From the engine, host by host, please
set maintenance mode and then click on reinstall button ensuring you choose
DEPLOY in hosted engine tab.
[ INFO  ] Please note that the engine VM ssh keys have changed. Please
remove the engine VM entry in ssh known_hosts on your clients.
[root@novirt3 ~]#
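
As an aside, removing the stale engine entry on each client is typically a
one-liner (assuming the engine FQDN of my lab is novmgr.localdomain.local):

  ssh-keygen -R novmgr.localdomain.local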

Next step will be to try removing one of the two hosts while the env is still
all on 4.3.10, then take a backup of the engine, then install the second host
as 4.4.8 and see if it goes OK.
I'm going to revert the 4.3.10 snapshot consistent env and try...

Problems so far before the next test:

. After the host deploy of the first 4.4 host (novirt3) I see that the
current config still has novirt1 as the SPM and the old hosted-engine
storage as the master domain (I have iSCSI-based SHE)
--> is this all as expected?

. the detach of the old engine storage gives these events in the GUI:
OVFs update was ignored - nothing to update for storage domain
'hosted_storage_old_20210827T173854'
Aug 27, 2021, 6:36:03 PM
Storage Domain hosted_storage_old_20210827T173854 (Data Center Default) was
deactivated and has moved to 'Preparing for maintenance' until it will no
longer be accessed by any Host of the Data Center.
8/27/21 6:36:11 PM
and the task seems completed, but it keeps holding the lock, so I cannot
actually deactivate it (even after putting novirt1 into maintenance to be
updated)

It didn't complain that the old hosted-engine storage was the master, and
apparently it switched another domain (DATA in my case) to the master role.
I don't know if this is related or not.

. one non-required network was not set up automatically on the new host (in
practice in this lab I only have ovirtmgmt and this network...), so when I
tried to live migrate the VM off the next host to be updated (novirt1) I got
no hosts available because of that.
I went into 'Setup Host Networks' for novirt3 and added the network, and then
all went fine: I was able to live migrate and install/add the next hosted
engine host (which is the last in my case, no ordinary hosts). Also on the
second host I had to go through the 'Setup Host Networks' button.

BTW: I was then able to update the cluster and DC compatibility from 4.3 to
4.6 and shut down/boot the VM after that.
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XKZ3MMFJYP5S2I3ACNKKB2OJKNNECW45/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-08-27 Thread Gianluca Cecchi
On Fri, Aug 27, 2021 at 4:10 PM Gianluca Cecchi 
wrote:

>
>
> no mention of ssh fingerprint reissue
>
> Any hint on how to do it?
>
> Gianluca
>

I found this link, related to 4.3 to 4.4 for RHV, that seems to somehow
confirm the need for a "spare" host:
https://www.frangarcia.me/posts/notes-on-upgrading-rhv-43-to-rhv-44/

I don't know if the author reads the ML.
But this goes completely against what seems to be present inside the RHV
docs, where, as described in my previous post:
"
If you decide to use a new host, you must assign a unique name to the new
host and then add it to the existing cluster before you begin the upgrade
procedure.
"

And also the oVirt docs that are substantially based on RHV ones:
https://www.ovirt.org/documentation/upgrade_guide/index.html#SHE_Upgrading_from_4-3
"
When upgrading oVirt Engine, it is recommended that you use one of the
existing hosts. If you decide to use a new host, you must assign a unique
name to the new host and then add it to the existing cluster before you
begin the upgrade procedure.
. . .

It is recommended that you use one of the existing hosts. If you decide to
use a new host, you must assign a unique name to the new host and then add
it to the existing cluster before you begin the upgrade procedure.
"

So in this case I would have the same problem related to the wrong
fingerprint, or I would be forced to copy the fingerprint before scratching
the host and reuse it (if that is even supported when passing from node on
OS version 7 to node on OS version 8).
It seems strange that this problem didn't arise before...
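
If one really wanted to keep the fingerprint, I suppose an (untested) sketch
would be to save and restore the SSH host keys across the reinstall:

  # on the old host, before scratching it
  tar czf /root/ssh-host-keys.tgz /etc/ssh/ssh_host_*
  # copy the archive somewhere safe; after reinstalling, restore it
  tar xzf /root/ssh-host-keys.tgz -C /
  systemctl restart sshd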

the phrase:
"
The upgraded host with the 4.4 self-hosted engine reports that HA mode is
active,...
"

suggests that the host name remains consistent with a pre-existing one,
and as a reinstall is mandatory (7 --> 8) I don't see how it could work...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XSZOJLM5BCDYPK57CEET5QA4TSYCLKTY/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-08-27 Thread Gianluca Cecchi
On Fri, Aug 27, 2021 at 3:25 PM Sandro Bonazzola 
wrote:

> Gianluca, after reinstalling the host with 4.4.8 ISO, did you update the
> ssh fingerprint of the fresh install within the ovirt engine? I'm assuming
> you didn't remove the host before reinstalling it and you didn't re-attach
> it to the engine after the upgrade.
>
>
No, I didn't do it... I would expect oVirt to manage it...
Can I do it now?
If I try to do it, I get:

Error while executing action: Cannot switch Host to Maintenance mode.
Host still has running VMs on it and is in Non Responsive state.

because the temporary engine is on it

It seems that in the 4.2 -> 4.3 update this was not necessary...

Also, for example, in the RHV guides I see:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/upgrade_guide/she_upgrading_from_4-3

"
When upgrading Red Hat Virtualization Manager, it is recommended that you
use one of the existing hosts. If you decide to use a new host, you must
assign a unique name to the new host and then add it to the existing
cluster before you begin the upgrade procedure.
"

and
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/upgrade_guide/upgrading_the_manager_to_4-4_4-3_she
"
Install RHVH 4.4 or Red Hat Enterprise Linux 8.2 or later on the existing
node currently running the Manager virtual machine to use it as the
self-hosted engine deployment host. See Installing the Self-hosted Engine
Deployment Host for more information.
Note

It is recommended that you use one of the existing hosts. If you decide to
use a new host, you must assign a unique name to the new host and then add
it to the existing cluster before you begin the upgrade procedure.

. . .

-

The deployment script automatically disables global maintenance mode and
calls the HA agent to start the self-hosted engine virtual machine. The
upgraded host with the 4.4 self-hosted engine reports that HA mode is
active, but the other hosts report that global maintenance mode is still
enabled as they are still connected to the old self-hosted engine storage.
- Detach the storage domain that hosts the Manager 4.3 machine. For
details, see Detaching a Storage Domain from a Data Center

in the *Administration Guide*.
"

no mention of ssh fingerprint reissue

Any hint on how to do it?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N26KCVJGXW3QQXE7YYCGLXWFPVKRN5PG/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-08-27 Thread Sandro Bonazzola
Gianluca, after reinstalling the host with 4.4.8 ISO, did you update the
ssh fingerprint of the fresh install within the ovirt engine? I'm assuming
you didn't remove the host before reinstalling it and you didn't re-attach
it to the engine after the upgrade.

On Fri, Aug 27, 2021 at 11:27 AM Gianluca Cecchi <
gianluca.cec...@gmail.com> wrote:

>
>
> On Wed, Aug 25, 2021 at 4:34 PM Gianluca Cecchi 
> wrote:
>
>> file /var/log/messages of novirt2
>>
>> https://drive.google.com/file/d/1hMcLeF3okJizLX4Gxj3jTG5bAPaAAFfK/view?usp=sharing
>>
>> Gianluca
>>
>>
> Same problem with 4.4.8 async 1.
>
> I'm deploying/restoring from novirt2 and the other host (still in 4.3.10)
> is novirt1.
>
> I arrive at
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check actual cluster
> location]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Enable GlusterFS at
> cluster level]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Set VLAN ID at
> datacenter level]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get active list of
> active firewalld zones]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Configure libvirt
> firewalld zone]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add host]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host
> tasks files]
> [ INFO  ] You can now connect to
> https://novirt2.localdomain.local:6900/ovirt-engine/ and check the status
> of this host and eventually remediate it, please continue only when the
> host is listed as 'up'
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock
> file]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until
> /tmp/ansible.5f702qq5_he_setup_lock is removed, delete it once ready to
> proceed]
>
> But then I'm able to connect to the local engine web admin UI, and novirt1
> shows as up while novirt2 is not responsive.
>
> Every 3 seconds inside engine.log I see these 3 lines
>
> 2021-08-27 11:05:54,065+02 INFO
>  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
> [] Connecting to novirt2.localdomain.local/172.19.0.232
> 2021-08-27 11:05:54,067+02 ERROR
> [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) []
> Unable to RefreshCapabilities: ConnectException: Connection refused
> 2021-08-27 11:05:54,068+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) []
> Command 'GetCapabilitiesAsyncVDSCommand(HostName =
> novirt2.localdomain.local,
> VdsIdAndVdsVDSCommandParametersBase:{hostId='ca9ff6f7-5a7c-4168-9632-998c52f76cfa',
> vds='Host[novirt2.localdomain.local,ca9ff6f7-5a7c-4168-9632-998c52f76cfa]'})'
> execution failed: java.net.ConnectException: Connection refused
>
>
> Can anyone tell me what I can check?
> Did you test the SHE upgrade from 4.3.10 to 4.4.8 in your test flows?
>
> BTW: I'm not using DNS but entries in /etc/hosts
>
> Thanks,
> Gianluca
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TLPZINBZRS3TSVCFHH25VFUI3JJICKET/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CUTN6SH5UIRYTGU7LRLD6I2LHCNM3DKY/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-08-27 Thread Gianluca Cecchi
On Wed, Aug 25, 2021 at 4:34 PM Gianluca Cecchi 
wrote:

> file /var/log/messages of novirt2
>
> https://drive.google.com/file/d/1hMcLeF3okJizLX4Gxj3jTG5bAPaAAFfK/view?usp=sharing
>
> Gianluca
>
>
Same problem with 4.4.8 async 1.

I'm deploying/restoring from novirt2 and the other host (still in 4.3.10)
is novirt1.

I arrive at

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check actual cluster
location]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Enable GlusterFS at
cluster level]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Set VLAN ID at datacenter
level]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get active list of active
firewalld zones]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Configure libvirt
firewalld zone]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host
tasks files]
[ INFO  ] You can now connect to
https://novirt2.localdomain.local:6900/ovirt-engine/ and check the status
of this host and eventually remediate it, please continue only when the
host is listed as 'up'
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock
file]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until
/tmp/ansible.5f702qq5_he_setup_lock is removed, delete it once ready to
proceed]

But then I'm able to connect to the local engine web admin UI, and novirt1
shows as up while novirt2 is not responsive.

Every 3 seconds inside engine.log I see these 3 lines

2021-08-27 11:05:54,065+02 INFO
 [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to novirt2.localdomain.local/172.19.0.232
2021-08-27 11:05:54,067+02 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) []
Unable to RefreshCapabilities: ConnectException: Connection refused
2021-08-27 11:05:54,068+02 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-60) []
Command 'GetCapabilitiesAsyncVDSCommand(HostName =
novirt2.localdomain.local,
VdsIdAndVdsVDSCommandParametersBase:{hostId='ca9ff6f7-5a7c-4168-9632-998c52f76cfa',
vds='Host[novirt2.localdomain.local,ca9ff6f7-5a7c-4168-9632-998c52f76cfa]'})'
execution failed: java.net.ConnectException: Connection refused
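
(For reference, vdsm listens on port 54321, so a quick reachability check
from the engine VM would be something like:

  timeout 3 bash -c '</dev/tcp/novirt2.localdomain.local/54321' && echo open

which here fails with the same "Connection refused" the engine logs show.)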


Can anyone tell me what I can check?
Did you test the SHE upgrade from 4.3.10 to 4.4.8 in your test flows?

BTW: I'm not using DNS but entries in /etc/hosts
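
(i.e. each machine carries /etc/hosts lines like the following; only the
novirt2 address is the real one from this env, the other IPs are made up
for illustration:

  172.19.0.231  novirt1.localdomain.local  novirt1   # illustrative IP
  172.19.0.232  novirt2.localdomain.local  novirt2
  172.19.0.230  novmgr.localdomain.local   novmgr    # illustrative IP

)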

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TLPZINBZRS3TSVCFHH25VFUI3JJICKET/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-08-25 Thread Gianluca Cecchi
file /var/log/messages of novirt2
https://drive.google.com/file/d/1hMcLeF3okJizLX4Gxj3jTG5bAPaAAFfK/view?usp=sharing

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/266UGL6XDWJARJ6K6ZDAD4UM5G6VYPVM/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-08-25 Thread Gianluca Cecchi
On Wed, Aug 25, 2021 at 2:18 PM Gianluca Cecchi 
wrote:
[snip]

> I selected to pause and I arrived here with the local engine VM completing
>> its setup:
>>
>>  INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add host]
>> [ INFO  ] changed: [localhost]
>> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host
>> tasks files]
>> [ INFO  ] You can now connect to
>> https://novirt2.localdomain.local:6900/ovirt-engine/ and check the
>> status of this host and eventually remediate it, please continue only when
>> the host is listed as 'up'
>> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
>> [ INFO  ] ok: [localhost]
>> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock
>> file]
>> [ INFO  ] changed: [localhost]
>> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until
>> /tmp/ansible.4_o6a2wo_he_setup_lock is removed, delete it once ready to
>> proceed]
>>
>> But connecting to the provided
>> https://novirt2.localdomain.local:6900/ovirt-engine/ url
>> I see that only the host still on 4.3.10 shows as up, while novirt2 is not
>> responsive
>>
>>
The meaning of the phrase above, "check the status of this
host and eventually remediate it, please continue only when the host is
listed as 'up'", is not clear to me...
Does it refer to the novirt2 host (that is, the first one I'm installing while
novirt1 is still on 4.3.10 with a VM running), or to novirt1?

Because if I go to the engine vm under /var/log/ovirt-engine I see:

 [root@novmgr ovirt-engine]# cd host-deploy/
[root@novmgr host-deploy]# ll
total 348
-rw-r--r--. 1 ovirt ovirt 354888 Aug 25 09:41
ovirt-host-mgmt-ansible-check-20210825094043-novirt1.localdomain.local.log
[root@novmgr host-deploy]#

So there is only the log file related to the deploy of novirt1 (which I see
as up); there is no log for novirt2.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3E2TSGSQUQQLEO66KJ7S6WS3ZFBP3V2A/


[ovirt-users] Re: problems testing 4.3.10 to 4.4.8 upgrade SHE

2021-08-25 Thread Gianluca Cecchi
On Wed, Aug 25, 2021 at 12:35 PM Gianluca Cecchi 
wrote:

> Hello,
> I'm testing what's in the subject in a test env with novirt1 and novirt2 as hosts.
> The first reinstalled host is novirt2.
> For this I downloaded the 4.4.8 iso of the node:
>
> https://resources.ovirt.org/pub/ovirt-4.4/iso/ovirt-node-ng-installer/4.4.8-2021081816/el8/ovirt-node-ng-installer-4.4.8-2021081816.el8.iso
>
> before running the restore command for the first scratched node I
> pre-installed the appliance rpm on it and I got:
> ovirt-engine-appliance-4.4-20210818155544.1.el8.x86_64
>
> I selected to pause and I arrived here with the local engine VM completing
> its setup:
>
>  INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Add host]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host
> tasks files]
> [ INFO  ] You can now connect to
> https://novirt2.localdomain.local:6900/ovirt-engine/ and check the status
> of this host and eventually remediate it, please continue only when the
> host is listed as 'up'
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock
> file]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until
> /tmp/ansible.4_o6a2wo_he_setup_lock is removed, delete it once ready to
> proceed]
>
> But connecting to the provided
> https://novirt2.localdomain.local:6900/ovirt-engine/ url
> I see that only the host still on 4.3.10 shows as up, while novirt2 is not
> responsive
>
> vm situation:
>
> https://drive.google.com/file/d/1OwHHzK0owU2HWZqvHFaLLbHVvjnBhRRX/view?usp=sharing
>
> storage situation:
>
> https://drive.google.com/file/d/1D-rmlpGsKfRRmYx2avBk_EYCG7XWMXNq/view?usp=sharing
>
> hosts situation:
>
> https://drive.google.com/file/d/1yrmfYF6hJFzKaG54Xk0Rhe2kY-TIcUvA/view?usp=sharing
>
> In engine.log I see
>
> 2021-08-25 09:14:38,548+02 ERROR
> [org.ovirt.engine.core.vdsbroker.HostDevListByCapsVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-4) [5f4541ee] Command
> 'HostDevListByCapsVDSCommand(HostName = novirt2.localdomain.local,
> VdsIdAndVdsVDSCommandParametersBase:{hostId='ca9ff6f7-5a7c-4168-9632-998c52f76cfa',
> vds='Host[novirt2.localdomain.local,ca9ff6f7-5a7c-4168-9632-998c52f76cfa]'})'
> execution failed: java.net.ConnectException: Connection refused
>
> and this message repeats continuously...
>
> I also tried to restart vdsmd on novirt2 but nothing changed.
>
> Do I have to restart the HA daemons on novirt2?
>
> Any insight?
>
> Thanks
> Gianluca
>


it seems it was not able to configure the networks on
novirt2.localdomain.local, as I see no ovirtmgmt bridge...
During setup it asked for the network card and I specified enp1s0 (the default
proposed in square brackets was enp2s0)
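
A quick way to confirm the missing bridge is to list bridge devices, e.g.:

  ip -br link show type bridge

which here shows only virbr0 (the libvirt one) and no ovirtmgmt.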

172.19.0.x is the mgmt network (with the IP of novirt2); 172.24.0.x is for iSCSI

[root@novirt2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
group default qlen 1000
link/ether 56:6f:bc:9a:00:5b brd ff:ff:ff:ff:ff:ff
inet 172.19.0.232/24 brd 172.19.0.255 scope global noprefixroute enp1s0
   valid_lft forever preferred_lft forever
inet6 fe80::546f:bcff:fe9a:5b/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
3: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
group default qlen 1000
link/ether 56:6f:bc:9a:00:5c brd ff:ff:ff:ff:ff:ff
inet 172.24.0.232/24 brd 172.24.0.255 scope global noprefixroute enp2s0
   valid_lft forever preferred_lft forever
inet6 fe80::546f:bcff:fe9a:5c/64 scope link noprefixroute
   valid_lft forever preferred_lft forever
4: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
group default qlen 1000
link/ether 56:6f:bc:9a:00:5d brd ff:ff:ff:ff:ff:ff
6: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP group default qlen 1000
link/ether 52:54:00:8b:b3:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.222.1/24 brd 192.168.222.255 scope global virbr0
   valid_lft forever preferred_lft forever
7: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
virbr0 state UNKNOWN group default qlen 1000
link/ether fe:16:3e:78:35:42 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe78:3542/64 scope link
   valid_lft forever preferred_lft forever
[root@novirt2 ~]#

[root@novirt2 network-scripts]# ll
total 12
-rw-r--r--. 1 root root 368 Aug 25 00:43 ifcfg-enp1s0
-rw-r--r--. 1 root root 277 Aug 25 00:51 ifcfg-enp2s0
-rw-r--r--. 1 root root 247 Aug 25 00:43 ifcfg-enp3s0
[root@novirt2 network-scripts]#

the strange thing is that if I go to the temporary manager IP web admin
page and select the host novirt2 -> network interfaces -> setup host networks,
I see eth0, eth1 and eth2 and not enp1s0, enp2s0, enp3s0...
See