[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 3:13 PM Dana Elfassy  wrote:

> Can you shut down the VMs just for the upgrade process?
>
> On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi 
> wrote:
>
>> On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy  wrote:
>>
>>> In order to run the playbooks you would also need the parameters that
>>> they use; some are set on the engine side.
>>> Why can't you upgrade the host from the engine admin portal?
>>>
>>>
>> Because when you upgrade a host you put it into maintenance first,
>> and this implies no VMs running on it.
>> But in a single-host environment you cannot do that.
>>
>> Gianluca
>>
>
We are talking about a chicken-and-egg problem.

You suggest driving a command from the engine, which is a VM that runs
inside the host, but you ask me to shut down the VMs running on that host
first...
This is a self-hosted engine environment composed of only one single host.
Normally I would use the procedure from the engine web admin GUI, one host
at a time, but with a single host it is not possible.

Gianluca
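
(For anyone facing the same single-host situation: a minimal sketch of the
usual manual workaround, assuming a hosted-engine setup. The exact sequence
is not prescribed in this thread and is illustrative only.)

    hosted-engine --set-maintenance --mode=global  # stop HA handling of the engine VM
    hosted-engine --vm-shutdown                    # shut down the engine VM
    # shut down any remaining guest VMs as well
    yum update                                     # update the host packages
    reboot
    hosted-engine --set-maintenance --mode=none    # let the HA agent restart the engine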
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3ZU43KQXYJO43CWTDDT733H4YZS4JA2U/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Dana Elfassy
Can you shut down the VMs just for the upgrade process?

On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi 
wrote:

> On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy  wrote:
>
>> In order to run the playbooks you would also need the parameters that
>> they use; some are set on the engine side.
>> Why can't you upgrade the host from the engine admin portal?
>>
>>
> Because when you upgrade a host you put it into maintenance first,
> and this implies no VMs running on it.
> But in a single-host environment you cannot do that.
>
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RAYP3AWPDRH7JHDBUJQZWTXRKPLA6DWI/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy  wrote:

> In order to run the playbooks you would also need the parameters that they
> use; some are set on the engine side.
> Why can't you upgrade the host from the engine admin portal?
>
>
Because when you upgrade a host you put it into maintenance first,
and this implies no VMs running on it.
But in a single-host environment you cannot do that.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ROKDXJ7RIJPXOJXRMHSK7DGSYIELGKEN/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Dana Elfassy
In order to run the playbooks you would also need the parameters that they
use; some are set on the engine side.
Why can't you upgrade the host from the engine admin portal?

On Mon, Oct 5, 2020 at 12:31 PM Gianluca Cecchi 
wrote:

> On Mon, Oct 5, 2020 at 10:37 AM Dana Elfassy  wrote:
>
>> Yes.
>> The main additional tasks that we execute during host upgrade, besides
>> updating packages, are certificate-related (checking certificate
>> validity, enrolling certificates), configuring advanced virtualization,
>> and setting the lvm filter.
>> Dana
>>
>>
> Thanks,
> What if I want to execute it directly on the host? Any command / pointer
> to run after "yum update"?
> This is to cover a single-host scenario, where I cannot drive it from
> the engine...
>
> Gianluca
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEBWRZOIT4Y2B2T2L2XYFJJV6VQ3VAOF/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Nir Soffer
On Mon, Oct 5, 2020 at 9:06 AM Gianluca Cecchi
 wrote:
>
> On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer  wrote:
>>
>> On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
>> >
>> >
>> >
>> > On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi  
>> > wrote:
>> >>
>> >> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>> >>>
>> >>>
>> >>>
>> >>> Since there wasn't a filter set on the node, the 4.4.2 update added the
>> >>> default filter for the root-lv PV.
>> >>> If there had been a filter set before the upgrade, it would not have
>> >>> been added by the 4.4.2 update.
>> 
>> 
>> >>
>> >> Do you mean that I will get the same problem upgrading from 4.4.2 to an
>> >> upcoming 4.4.3, since I still don't have any filter set?
>> >> That would not be desirable.
>> >
>> > Once you have got back into 4.4.2, it's recommended to set the lvm filter
>> > to fit the PVs you use on your node.
>> > For the local root PV you can run
>> > # vdsm-tool config-lvm-filter -y
>> > For the gluster bricks you'll need to add their UUIDs to the filter as
>> > well.
>>
>> vdsm-tool is expected to add all the devices needed by the mounted
>> logical volumes, so adding devices manually should not be needed.
>>
>> If this does not work please file a bug and include all the info to reproduce
>> the issue.
>>
>
> I don't know exactly what happened when I installed ovirt-ng-node in 4.4.0,
> but the effect was that no filter at all was set up in lvm.conf, hence the
> problem I had upgrading to 4.4.2.
> Is there any way to see the related logs for 4.4.0? In which phase of the
> install (of the node itself, or of the gluster-based wizard) is the
> vdsm-tool command supposed to run?
>
> Right now in 4.4.2 I get this output, so it seems it works in 4.4.2:
>
> "
> [root@ovirt01 ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Found these mounted logical volumes on this host:
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_data
>   mountpoint:  /gluster_bricks/data
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_engine
>   mountpoint:  /gluster_bricks/engine
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_vmstore
>   mountpoint:  /gluster_bricks/vmstore
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/onn-home
>   mountpoint:  /home
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1
>   mountpoint:  /
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-swap
>   mountpoint:  [SWAP]
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-tmp
>   mountpoint:  /tmp
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var
>   mountpoint:  /var
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_crash
>   mountpoint:  /var/crash
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_log
>   mountpoint:  /var/log
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_log_audit
>   mountpoint:  /var/log/audit
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
> This is the recommended LVM filter for this host:
>
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|", 
> "r|.*|" ]
>
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> To use the recommended filter we need to add multipath
> blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
>
>   blacklist {
>   wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
>   wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
>   }
>
>
> Configure host? [yes,NO]
>
> "
> Does this mean that by answering "yes" I will get both the lvm- and
> multipath-related files modified?

Yes...
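
(Concretely, based on the tool output quoted above, that "yes" should
leave roughly the following on disk; this is a sketch assembled from the
quoted output, not captured from the actual host.)

    # /etc/lvm/lvm.conf (filter line)
    filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|",
               "a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|",
               "r|.*|" ]

    # /etc/multipath/conf.d/vdsm_blacklist.conf
    blacklist {
        wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
        wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
    }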

>
> Right now my multipath is configured this way:
>
> [root@ovirt01 ~]# grep -v "^#" /etc/multipath.conf | grep -v "^#" | grep 
> -v "^$"
> defaults {
> polling_interval5
> no_path_retry   4
> user_friendly_names no
> flush_on_last_del   yes
> 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 10:37 AM Dana Elfassy  wrote:

> Yes.
> The main additional tasks that we execute during host upgrade, besides
> updating packages, are certificate-related (checking certificate
> validity, enrolling certificates), configuring advanced virtualization,
> and setting the lvm filter.
> Dana
>
>
Thanks,
What if I want to execute it directly on the host? Any command / pointer to
run after "yum update"?
This is to cover a single-host scenario, where I cannot drive it from
the engine...

Gianluca
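
(A possible sketch of such host-local steps, assembled from the tasks Dana
lists above; there is no simple host-local equivalent of the engine-side
certificate enrollment, so treat this as an approximation only.)

    yum update                     # update the host packages
    vdsm-tool configure --force    # re-apply vdsm configuration
    vdsm-tool config-lvm-filter    # check/apply the recommended lvm filter
    reboot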
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6BDBZMEDS4NDV7D6MGLF2C35M4G66V5K/


[ovirt-users] Re: Upgrade Host Compatibility Version

2020-10-05 Thread Dana Elfassy
Hi Jonathan,
What is the error you're getting when you're trying to upgrade using the
upgrade guide?
Can you also attach the engine log? (/var/log/ovirt-engine/engine.log)
Thanks,
Dana

On Sun, Oct 4, 2020 at 6:54 PM jb  wrote:

> Hello everybody,
>
> over the past few days I upgraded our environment from oVirt 4.3 to 4.4.2,
> and now I am stuck on upgrading the host compatibility version. It shows a
> compatibility level only up to version 4.4. The VMs have a limit up to
> version 4.5, and the cluster shows that I can upgrade them to 4.5.
>
> On the cluster page there is also an Upgrade guide, but it fails when I
> try it. At the moment I have only one host to upgrade (no hosted
> engine)...
>
> Thanks for helping!
>
> Jonathan
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OULJ2FCU3CLFQHS3PS3YEMHQ2DWKGAFR/
>
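
(While gathering that log, one quick, illustrative way to pull recent
errors before attaching the full file; the grep pattern is an assumption,
adjust as needed:)

    grep -i 'ERROR' /var/log/ovirt-engine/engine.log | tail -n 50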
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AM2VFBEZX4RKFGMDIQSXK4ISNE6LZWYY/


[ovirt-users] SPICE proxy behind nginx reverse proxy

2020-10-05 Thread Colin Coe
Hi all

As per $SUBJECT, I have a SPICE proxy behind a reverse proxy which all
external VDI users are forced to use.

We've only started doing this in the last week or so, but I'm now getting
heaps of reports of SPICE sessions "freezing".  The testing that I've done
shows that a SPICE session that is unattended for 10-15 minutes hangs or
freezes.  By this I mean that you can't interact with the VM (using SPICE)
via mouse or keyboard.  Restarting the SPICE session fixes the problem.

Is anyone else doing this?  If so, have you noticed the SPICE session
freezes?

Thanks in advance
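
(A hedged editorial aside: if the reverse proxy is nginx's stream module,
its idle timeout ("proxy_timeout", default 10 minutes) would drop an
unattended TCP session on roughly the timescale described above. A sketch
of raising it follows; the listen port and backend host are assumptions,
not details from this thread.)

    stream {
        server {
            listen 3128;                           # assumed SPICE proxy port
            proxy_pass spice-proxy.internal:3128;  # hypothetical backend proxy
            proxy_timeout 24h;                     # keep idle SPICE sessions open
        }
    }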
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WP5XTSX4QR7ZI7V3QQO47USD5X52VN6Y/


[ovirt-users] Re: oVirt 4.3.10 and ansible default timeouts

2020-10-05 Thread Dana Elfassy
Hi Gianluca,
Yes. We increased the default timeout for the ansible playbooks to support
slower networks and provide more time in the case of larger updates.
Best Regards,
Dana

On Fri, Oct 2, 2020 at 2:02 PM Gianluca Cecchi 
wrote:

> Hello,
> in the past when I was in 4.3.7 I used this file:
> /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
> with
> ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80
>
> to bypass the default of 30 minutes at that time.
> I updated in steps to 4.3.8 (in February), 4.3.9 (in April) and 4.3.10 (in
> July).
>
> Due to an error on my side I noticed that the task for which I extended
> the ansible timeout (a task I hadn't executed in recent months)
> does indeed fail with a timeout after 80 minutes.
> Intending to extend the custom timeout again, I went to
> /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf, provided
> by ovirt-engine-backend-4.3.10.4-1.el7.noarch,
> and I actually see this inside:
> "
> # Specify the ansible-playbook command execution timeout in minutes. It's
> used for any task, which executes
> # AnsibleExecutor class. To change the value permanentaly create a conf
> file 99-ansible-playbook-timeout.conf in
> # /etc/ovirt-engine/engine.conf.d/
> ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=120
> "
> and the file seems to be the original as provided, not tampered with:
>
> [root@ovmgr1 test_backup]# rpm -qvV
> ovirt-engine-backend-4.3.10.4-1.el7.noarch | grep virt-engine.conf$
> .
>  /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf
> [root@ovmgr1 test_backup]#
>
> So the question is: was the intended default value changed to 120 in 4.3.10
> (or in some version after 4.3.7)?
>
> Thanks,
> Gianluca
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WXFVIYC4LYOQHWU4IGKGVLNQ7OFVPENI/
>
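
(For reference, a minimal sketch of the override mechanism discussed
above, using the file and variable named in the quoted message; the
engine restart is an assumption of the usual engine.conf.d handling, not
a step stated in this thread.)

    # /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
    ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=180   # minutes

    # then restart the engine service to pick up the change
    systemctl restart ovirt-engine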
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C6LEJX36BHHJPZSARFZYOV6SN7LBLHP6/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Dana Elfassy
Yes.
The main additional tasks that we execute during host upgrade, besides
updating packages, are certificate-related (checking certificate validity,
enrolling certificates), configuring advanced virtualization, and setting
the lvm filter.
Dana

On Mon, Oct 5, 2020 at 9:31 AM Sandro Bonazzola  wrote:

>
>
> Il giorno sab 3 ott 2020 alle ore 14:16 Gianluca Cecchi <
> gianluca.cec...@gmail.com> ha scritto:
>
>> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> Il giorno ven 25 set 2020 alle ore 15:32 Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> ha scritto:
>>>


 On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
 wrote:

> oVirt Node 4.4.2 is now generally available
>
> The oVirt project is pleased to announce the general availability of
> oVirt Node 4.4.2 , as of September 25th, 2020.
>
> This release completes the oVirt 4.4.2 release published on September
> 17th
>

 Thanks for the news!

 How to prevent hosts entering emergency mode after upgrade from oVirt
> 4.4.1
>
> Due to Bug 1837864
>  - Host enter
> emergency mode after upgrading to latest build
>
> If you have your root file system on a multipath device on your hosts
> you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
> your host entering emergency mode.
>
> In order to prevent this be sure to upgrade oVirt Engine first, then
> on your hosts:
>
>    1. Remove the current lvm filter while still on 4.4.1, or in
>       emergency mode (if rebooted).
>    2. Reboot.
>    3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
>    4. Run vdsm-tool config-lvm-filter to confirm there is a new filter
>       in place.
>    5. Only if not using oVirt Node: run "dracut --force --add multipath"
>       to rebuild initramfs with the correct filter configuration.
>    6. Reboot.
>
>
>
 What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
 to follow the same steps as if I were in 4.4.1 or what?
 I would like to avoid going through 4.4.1 if possible.

>>>
>>> I don't think anyone has tested 4.4.0 to 4.4.2, but the above procedure
>>> should work for that case as well.
>>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>>
>>> # grep '^filter = ' /etc/lvm/lvm.conf
>>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>>
>>>
>>>
>>>

 Thanks,
 Gianluca

>>>
>>>
>> OK, so I tried on my single-host HCI setup, installed with ovirt-node-ng
>> 4.4.0 and the gluster wizard and never updated until now.
>> Updated the self-hosted engine to 4.4.2 without problems.
>>
>> My host doesn't have any filter or global_filter set up in lvm.conf in
>> 4.4.0.
>>
>> So I update it:
>>
>> [root@ovirt01 vdsm]# yum update
>>
>
> Please use the update command from the engine admin portal.
> The ansible code running from there also performs additional steps other
> than just yum update.
> +Dana Elfassy  can you elaborate on other steps
> performed during the upgrade?
>
>
>
>> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51
>> PM CEST.
>> Dependencies resolved.
>>
>> 
>>  Package ArchitectureVersion
>>   Repository  Size
>>
>> 
>> Installing:
>>  ovirt-node-ng-image-update  noarch  4.4.2-1.el8
>>   ovirt-4.4  782 M
>>  replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>>
>> Transaction Summary
>>
>> 
>> Install  1 Package
>>
>> Total download size: 782 M
>> Is this ok [y/N]: y
>> Downloading Packages:
>> ovirt-node-ng-image-update-4.4  27% [= ] 6.0 MB/s |
>> 145 MB 01:45 ETA
>>
>>
>> 
>> Total   5.3
>> MB/s | 782 MB 02:28
>> Running transaction check
>> Transaction check succeeded.
>> Running transaction test
>> Transaction test succeeded.
>> Running transaction
>>   Preparing:
>>1/1
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>>1/2
>>   Installing   : ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>>1/2
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>>1/2
>>   Obsoleting   :
>> ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch
>>  2/2
>>   

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 8:31 AM Sandro Bonazzola  wrote:

>
>> OK, so I tried on my single host HCI installed with ovirt-node-ng 4.4.0
>> and gluster wizard and never update until now.
>> Updated self hosted engine to 4.4.2 without problems.
>>
>> My host doesn't have any filter or global_filter set up in lvm.conf  in
>> 4.4.0.
>>
>> So I update it:
>>
>> [root@ovirt01 vdsm]# yum update
>>
>
> Please use the update command from the engine admin portal.
> The ansible code running from there also performs additional steps other
> than just yum update.
> +Dana Elfassy  can you elaborate on other steps
> performed during the upgrade?
>
>
Yes, in general.
But for single-host environments it is not possible, at least I think,
because you are upgrading the host where the engine is running...

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VCPWGUVJYERJJK7UY4K22U32L52ZXV5D/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Sandro Bonazzola
Il giorno sab 3 ott 2020 alle ore 14:16 Gianluca Cecchi <
gianluca.cec...@gmail.com> ha scritto:

> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> Il giorno ven 25 set 2020 alle ore 15:32 Gianluca Cecchi <
>> gianluca.cec...@gmail.com> ha scritto:
>>
>>>
>>>
>>> On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
>>> wrote:
>>>
 oVirt Node 4.4.2 is now generally available

 The oVirt project is pleased to announce the general availability of
 oVirt Node 4.4.2 , as of September 25th, 2020.

 This release completes the oVirt 4.4.2 release published on September
 17th

>>>
>>> Thanks for the news!
>>>
>>> How to prevent hosts entering emergency mode after upgrade from oVirt
 4.4.1

 Due to Bug 1837864
  - Host enter
 emergency mode after upgrading to latest build

 If you have your root file system on a multipath device on your hosts
 you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
 your host entering emergency mode.

 In order to prevent this be sure to upgrade oVirt Engine first, then on
 your hosts:

   1. Remove the current lvm filter while still on 4.4.1, or in emergency
      mode (if rebooted).
   2. Reboot.
   3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
   4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
      place.
   5. Only if not using oVirt Node: run "dracut --force --add multipath"
      to rebuild initramfs with the correct filter configuration.
   6. Reboot.
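
(A shell sketch of the six quoted steps, for a host where you work
directly at the command line; the sed edit of lvm.conf is illustrative
and not a command from the announcement, so review the file rather than
applying it blindly.)

    sed -i.bak 's/^filter = /# filter = /' /etc/lvm/lvm.conf  # 1. drop the old filter
    reboot                                                    # 2.
    yum update                                                # 3. upgrade to 4.4.2 (engine first)
    vdsm-tool config-lvm-filter                               # 4. confirm the new filter
    dracut --force --add multipath                            # 5. non-oVirt-Node hosts only
    reboot                                                    # 6.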



>>> What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
>>> to follow the same steps as if I were in 4.4.1 or what?
>>> I would like to avoid going through 4.4.1 if possible.
>>>
>>
>> I don't think anyone has tested 4.4.0 to 4.4.2, but the above procedure
>> should work for that case as well.
>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>
>> # grep '^filter = ' /etc/lvm/lvm.conf
>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>
>>
>>
>>
>>>
>>> Thanks,
>>> Gianluca
>>>
>>
>>
> OK, so I tried on my single-host HCI setup, installed with ovirt-node-ng
> 4.4.0 and the gluster wizard and never updated until now.
> Updated the self-hosted engine to 4.4.2 without problems.
>
> My host doesn't have any filter or global_filter set up in lvm.conf in
> 4.4.0.
>
> So I update it:
>
> [root@ovirt01 vdsm]# yum update
>

Please use the update command from the engine admin portal.
The ansible code running from there also performs additional steps other
than just yum update.
+Dana Elfassy  can you elaborate on other steps
performed during the upgrade?



> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM
> CEST.
> Dependencies resolved.
>
> 
>  Package ArchitectureVersion
> Repository  Size
>
> 
> Installing:
>  ovirt-node-ng-image-update  noarch  4.4.2-1.el8
> ovirt-4.4  782 M
>  replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>
> Transaction Summary
>
> 
> Install  1 Package
>
> Total download size: 782 M
> Is this ok [y/N]: y
> Downloading Packages:
> ovirt-node-ng-image-update-4.4  27% [= ] 6.0 MB/s |
> 145 MB 01:45 ETA
>
>
> 
> Total   5.3
> MB/s | 782 MB 02:28
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:
>  1/1
>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Installing   : ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Obsoleting   :
> ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch
>  2/2
>   Verifying: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Verifying:
> ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch
>  2/2
> Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch.rpm
>
> Installed:
>   ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>
>
> Complete!
> [root@ovirt01 vdsm]# sync
> [root@ovirt01 vdsm]#
>
> I reboot and I'm proposed 4.4.2 by default with 4.4.0 available too.
> But 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer  wrote:

> On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
> >
> >
> >
> > On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
> >>
> >> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
> >>>
> >>>
> >>>
> >>> Since there wasn't a filter set on the node, the 4.4.2 update added
> the default filter for the root-lv PV.
> >>> If there had been a filter set before the upgrade, it would not have
> been added by the 4.4.2 update.
> 
> 
> >>
> >> Do you mean that I will get the same problem upgrading from 4.4.2 to an
> upcoming 4.4.3, since I still don't have any filter set?
> >> That would not be desirable.
> >
> > Once you have got back into 4.4.2, it's recommended to set the lvm
> filter to fit the PVs you use on your node.
> > For the local root PV you can run
> > # vdsm-tool config-lvm-filter -y
> > For the gluster bricks you'll need to add their UUIDs to the filter as
> well.
>
> vdsm-tool is expected to add all the devices needed by the mounted
> logical volumes, so adding devices manually should not be needed.
>
> If this does not work please file a bug and include all the info to
> reproduce
> the issue.
>
>
I don't know exactly what happened when I installed ovirt-ng-node in 4.4.0,
but the effect was that no filter at all was set up in lvm.conf, hence the
problem I had upgrading to 4.4.2.
Is there any way to see the related logs for 4.4.0? In which phase of the
install (of the node itself, or of the gluster-based wizard) is the
vdsm-tool command supposed to run?

Right now in 4.4.2 I get this output, so it seems it works in 4.4.2:

"
[root@ovirt01 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_data
  mountpoint:  /gluster_bricks/data
  devices:
/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_engine
  mountpoint:  /gluster_bricks/engine
  devices:
/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_vmstore
  mountpoint:  /gluster_bricks/vmstore
  devices:
/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/onn-home
  mountpoint:  /home
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1
  mountpoint:  /
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-swap
  mountpoint:  [SWAP]
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-tmp
  mountpoint:  /tmp
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var
  mountpoint:  /var
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_crash
  mountpoint:  /var/crash
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_log
  mountpoint:  /var/log
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_log_audit
  mountpoint:  /var/log/audit
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

This is the recommended LVM filter for this host:

  filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|",
"a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|",
"r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

To use the recommended filter we need to add multipath
blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:

  blacklist {
  wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
  wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
  }


Configure host? [yes,NO]

"
Does this mean that by answering "yes" I will get both the lvm- and
multipath-related files modified?

Right now my multipath is configured this way:

[root@ovirt01 ~]# grep -v "^#" /etc/multipath.conf | grep -v "^#" |
grep -v "^$"
defaults {
polling_interval5
no_path_retry   4
user_friendly_names no
flush_on_last_del   yes
fast_io_fail_tmo5
dev_loss_tmo30
max_fds 4096
}
blacklist {
protocol "(scsi:adt|scsi:sbp)"
}
overrides {
  no_path_retry4
}
[root@ovirt01 ~]#

with blacklist explicit on both disks but inside different files:

root disk:
[root@ovirt01 ~]# cat /etc/multipath/conf.d/vdsm_blacklist.conf
# This file is managed by vdsm,