[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-06 Thread Martin Perina
Hi Gianluca,

please see my replies inline

On Tue, Oct 6, 2020 at 11:37 AM Gianluca Cecchi 
wrote:

> On Tue, Oct 6, 2020 at 11:25 AM Martin Perina  wrote:
>
>>
>>> You say to drive a command from the engine, which is a VM that runs inside
>>> the host, but you ask to shut down the VMs running on the host first...
>>> This is a self-hosted engine setup composed of only a single host.
>>> Normally I would use the procedure from the engine web admin GUI, one
>>> host at a time, but with a single host that is not possible.
>>>
>>
>> We have said several times that it doesn't make sense to use oVirt on a
>> single-host system. So you either need to attach a 2nd host to your setup
>> (preferred) or shut down all VMs and run a manual upgrade of your host OS
>>
>>
> We who?
>

So I've spent the past hour thoroughly reviewing our upstream documentation,
and you are right: we don't have any clear requirement about the minimum
number of hosts in the upstream oVirt documentation.
But here are the facts:

1. To be able to upgrade a host, either from the UI/REST API or manually using
SSH, the host always needs to be in Maintenance mode (a minimal REST API sketch
of that flow follows after this list of facts):

https://www.ovirt.org/documentation/administration_guide/#Updating_a_host_between_minor_releases

2. To perform Reinstall or Enroll Certificate on a host, the host needs to
be in Maintenance mode:

https://www.ovirt.org/documentation/administration_guide/#Reinstalling_Hosts_admin

3. When a host is in Maintenance mode, there are no oVirt-managed VMs running
on it:

https://www.ovirt.org/documentation/administration_guide/#Moving_a_host_to_maintenance_mode

4. When the engine is not running (either stopped or crashed), VMs running on
hypervisor hosts are unaffected (meaning they keep running independently of the
engine), but they are pretty much "pinned to the host they are running on":
without a running engine VMs cannot be migrated or started/stopped (of course
you can still shut a VM down from within the guest)
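
To illustrate fact 1, here is a minimal sketch of the same Maintenance + upgrade
flow driven through the REST API (the engine URL, HOST_ID and credentials are
placeholders, and the endpoint names should be double-checked against the API
documentation of the engine version you run):

# curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' -d '<action/>' https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate
# curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' -d '<action/>' https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/upgrade

The first call moves the host to Maintenance mode (which is why its VMs must be
migrated or stopped first), the second triggers the same upgrade flow the UI uses.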

So, using just the facts above, here are the logical conclusions:

1. Standalone engine installation with only one hypervisor host
- this means that the engine runs on a bare-metal host (for example
engine.domain.com) and a single hypervisor host is managed by it (for example
host1.domain.com)
- in this scenario the administrator is able to perform all
maintenance tasks (even though at the cost that VMs running on the hypervisor
need to be stopped before switching it to Maintenance mode),
  because the engine runs independently of the hypervisor

2. Hosted engine installation with one hypervisor host
- this means that the engine runs as a VM (for example engine.domain.com)
inside a single hypervisor host, which is in turn managed by it (for example
host1.domain.com)
- in this scenario maintenance of the host is very limited:
- you cannot move the host to Maintenance, because the hosted engine VM
cannot be migrated to another host
- you can enable global Maintenance and then manually stop the
hosted engine VM (a minimal command sketch follows below), but then you don't
have an engine with which to perform
maintenance tasks (for example Upgrade, Reinstall or Enroll Certificates)
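
A minimal sketch of that limited single-host workflow, using the hosted-engine
CLI (double-check the exact options against the ovirt-hosted-engine-setup
version installed on your host):

# hosted-engine --set-maintenance --mode=global
# hosted-engine --vm-shutdown
... perform the manual maintenance on the host (e.g. yum update and reboot) ...
# hosted-engine --vm-start
# hosted-engine --set-maintenance --mode=none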

But in both of the above use cases you cannot use the biggest oVirt advantage,
and that's shared storage among hypervisor hosts, which allows you to perform
live migration of VMs. Thanks to that feature you can perform
maintenance tasks on the host(s) without interrupting the VM services you
provide.

*From the above it's obvious that we really need to clearly state that in a
production environment oVirt requires at least 2 hypervisor hosts
for full functionality.*

> In the old days there was the all-in-one setup, which was replaced by
> single-host HCI
>

The all-in-one feature was deprecated in oVirt 3.6 and fully removed in
oVirt 4.0.

> ... developers also put extra effort into the setup wizard to cover the
> single-host scenario.
>

Yes, you are right: you can initially set up oVirt with just a single host,
but it's expected that you are going to add additional host(s) soon.

Obviously it is aimed at test bed / devel / home environments, not
> production ones.
>

Of course, for development use whatever you want, but for production you
care about your setup, because you want the services you offer to run
smoothly.

> Do you want me to send you the list of Bugzillas filed by users of
> single-host environments that have helped Red Hat to have a better working RHV
> too?
>

It's clearly stated that at least 2 hypervisors are required for a hosted
engine or standalone RHV installation:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/rhv_architecture
But as I mentioned above, we have a bug in the oVirt documentation in that such
an important requirement is not clearly stated. And this is not the fault of
the community; this is the fault of us oVirt maintainers, who have forgotten
to mention such an important requirement in the oVirt documentation, and it's
clearly visible that it has caused confusion for so many users.

But no matter what I 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-06 Thread Gianluca Cecchi
On Tue, Oct 6, 2020 at 11:25 AM Martin Perina  wrote:

>
>> You say to drive a command from the engine, which is a VM that runs inside
>> the host, but you ask to shut down the VMs running on the host first...
>> This is a self-hosted engine setup composed of only a single host.
>> Normally I would use the procedure from the engine web admin GUI, one
>> host at a time, but with a single host that is not possible.
>>
>
> We have said several times that it doesn't make sense to use oVirt on a
> single-host system. So you either need to attach a 2nd host to your setup
> (preferred) or shut down all VMs and run a manual upgrade of your host OS
>
>
We who?
In the old days there was the all-in-one setup, which was replaced by
single-host HCI ... developers also put extra effort into the setup wizard
to cover the single-host scenario.
Obviously it is aimed at test bed / devel / home environments, not
production ones.
Do you want me to send you the list of Bugzillas filed by users of
single-host environments that have helped Red Hat to have a better working RHV
too?

Please think more deeply next time, thanks

Gianluca


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-06 Thread Martin Perina
On Mon, Oct 5, 2020 at 3:25 PM Gianluca Cecchi 
wrote:

>
>
> On Mon, Oct 5, 2020 at 3:13 PM Dana Elfassy  wrote:
>
>> Can you shut down the VMs just for the upgrade process?
>>
>> On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi 
>> wrote:
>>
>>> On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy 
>>> wrote:
>>>
 In order to run the playbooks you would also need the parameters that
 they use - some are set on the engine side
 Why can't you upgrade the host from the engine admin portal?


>>> Because when you upgrade a host you put it into Maintenance first.
>>> And this implies no VMs running on it.
>>> But if you are in a single-host environment you cannot do that.
>>>
>>> Gianluca
>>>
>>
> we are talking about a chicken-and-egg problem.
>
> You say to drive a command from the engine, which is a VM that runs inside
> the host, but you ask to shut down the VMs running on the host first...
> This is a self-hosted engine setup composed of only a single host.
> Normally I would use the procedure from the engine web admin GUI, one host
> at a time, but with a single host that is not possible.
>

We have said several times that it doesn't make sense to use oVirt on a
single-host system. So you either need to attach a 2nd host to your setup
(preferred) or shut down all VMs and run a manual upgrade of your host OS


> Gianluca


-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 3:13 PM Dana Elfassy  wrote:

> Can you shut down the VMs just for the upgrade process?
>
> On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi 
> wrote:
>
>> On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy  wrote:
>>
>>> In order to run the playbooks you would also need the parameters that
>>> they use - some are set on the engine side
>>> Why can't you upgrade the host from the engine admin portal?
>>>
>>>
>> Because when you upgrade a host you put it into Maintenance first.
>> And this implies no VMs running on it.
>> But if you are in a single-host environment you cannot do that.
>>
>> Gianluca
>>
>
we are talking about a chicken-and-egg problem.

You say to drive a command from the engine, which is a VM that runs inside
the host, but you ask to shut down the VMs running on the host first...
This is a self-hosted engine setup composed of only a single host.
Normally I would use the procedure from the engine web admin GUI, one host
at a time, but with a single host that is not possible.

Gianluca


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Dana Elfassy
Can you shut down the VMs just for the upgrade process?

On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi 
wrote:

> On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy  wrote:
>
>> In order to run the playbooks you would also need the parameters that
>> they use - some are set on the engine side
>> Why can't you upgrade the host from the engine admin portal?
>>
>>
> Because when you upgrade a host you put it into Maintenance first.
> And this implies no VMs running on it.
> But if you are in a single-host environment you cannot do that.
>
> Gianluca
>


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy  wrote:

> In order to run the playbooks you would also need the parameters that they
> use - some are set on the engine side
> Why can't you upgrade the host from the engine admin portal?
>
>
Because when you upgrade a host you put it into Maintenance first.
And this implies no VMs running on it.
But if you are in a single-host environment you cannot do that.

Gianluca


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Dana Elfassy
In order to run the playbooks you would also need the parameters that they
use - some are set on the engine side
Why can't you upgrade the host from the engine admin portal?

On Mon, Oct 5, 2020 at 12:31 PM Gianluca Cecchi 
wrote:

> On Mon, Oct 5, 2020 at 10:37 AM Dana Elfassy  wrote:
>
>> Yes.
>> The additional main tasks that we execute during host upgrade, besides
>> updating packages, are certificate-related (checking certificate
>> validity, enrolling certificates), configuring advanced virtualization and
>> setting the lvm filter.
>> Dana
>>
>>
> Thanks,
> What if I want to execute it directly on the host? Any command / pointer on
> what to run after "yum update"?
> This is to cover a single-host scenario, where I cannot drive it from
> the engine...
>
> Gianluca
>
>
>


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Nir Soffer
On Mon, Oct 5, 2020 at 9:06 AM Gianluca Cecchi
 wrote:
>
> On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer  wrote:
>>
>> On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
>> >
>> >
>> >
>> > On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi  
>> > wrote:
>> >>
>> >> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>> >>>
>> >>>
>> >>>
>> >>> Since there wasn't a filter set on the node, the 4.4.2 update added the 
>> >>> default filter for the root-lv pv
>> >>> if there was some filter set before the upgrade, it would not have been 
>> >>> added by the 4.4.2 update.
>> 
>> 
>> >>
>> >> Do you mean that I will get the same problem upgrading from 4.4.2 to an 
>> >> upcoming 4.4.3, as also now I don't have any filter set?
>> >> This would not be desirable
>> >
>> > Once you have got back into 4.4.2, it's recommended to set the lvm filter 
>> > to fit the pvs you use on your node
>> > for the local root pv you can run
>> > # vdsm-tool config-lvm-filter -y
>> > For the gluster bricks you'll need to add their uuids to the filter as 
>> > well.
>>
>> vdsm-tool is expected to add all the devices needed by the mounted
>> logical volumes, so adding devices manually should not be needed.
>>
>> If this does not work please file a bug and include all the info to reproduce
>> the issue.
>>
>
> I don't know exactly what happened when I installed ovirt-node-ng in 4.4.0,
> but the effect was that no filter at all was set up in lvm.conf, hence the
> problem I had upgrading to 4.4.2.
> Any way to see related logs for 4.4.0? In which phase of the install of the 
> node itself or of the gluster based wizard is it supposed to run the 
> vdsm-tool command?
>
> Right now in 4.4.2 I get this output, so it seems it works in 4.4.2:
>
> "
> [root@ovirt01 ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Found these mounted logical volumes on this host:
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_data
>   mountpoint:  /gluster_bricks/data
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_engine
>   mountpoint:  /gluster_bricks/engine
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_vmstore
>   mountpoint:  /gluster_bricks/vmstore
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/onn-home
>   mountpoint:  /home
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1
>   mountpoint:  /
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-swap
>   mountpoint:  [SWAP]
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-tmp
>   mountpoint:  /tmp
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var
>   mountpoint:  /var
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_crash
>   mountpoint:  /var/crash
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_log
>   mountpoint:  /var/log
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_log_audit
>   mountpoint:  /var/log/audit
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
> This is the recommended LVM filter for this host:
>
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|", 
> "r|.*|" ]
>
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> To use the recommended filter we need to add multipath
> blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
>
>   blacklist {
>   wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
>   wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
>   }
>
>
> Configure host? [yes,NO]
>
> "
> Does this mean that answering "yes" I will get both lvm and multipath related 
> files modified?

Yes...

>
> Right now my multipath is configured this way:
>
> [root@ovirt01 ~]# grep -v "^#" /etc/multipath.conf | grep -v "^#" | grep 
> -v "^$"
> defaults {
> polling_interval5
> no_path_retry   4
> user_friendly_names no
> flush_on_last_del   yes
> 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 10:37 AM Dana Elfassy  wrote:

> Yes.
> The additional main tasks that we execute during host upgrade, besides
> updating packages, are certificate-related (checking certificate
> validity, enrolling certificates), configuring advanced virtualization and
> setting the lvm filter.
> Dana
>
>
Thanks,
What if I want to execute it directly on the host? Any command / pointer on
what to run after "yum update"?
This is to cover a single-host scenario, where I cannot drive it from
the engine...

Gianluca


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Dana Elfassy
Yes.
The additional main tasks that we execute during host upgrade, besides
updating packages, are certificate-related (checking certificate
validity, enrolling certificates), configuring advanced virtualization and
setting the lvm filter.
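
For a single host that cannot be driven from the engine, a rough manual
approximation on the host itself would be the sketch below (only an
approximation: the certificate validity check and enrollment, as well as the
advanced virtualization setup, are normally driven by the engine-side Ansible
and have no simple host-side one-liner):

# yum update
# vdsm-tool config-lvm-filter -y
# reboot
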
Dana

On Mon, Oct 5, 2020 at 9:31 AM Sandro Bonazzola  wrote:

>
>
> On Sat, Oct 3, 2020 at 2:16 PM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Fri, Sep 25, 2020 at 3:32 PM Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>


 On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
 wrote:

> oVirt Node 4.4.2 is now generally available
>
> The oVirt project is pleased to announce the general availability of
> oVirt Node 4.4.2 , as of September 25th, 2020.
>
> This release completes the oVirt 4.4.2 release published on September
> 17th
>

 Thanks for the news!

 How to prevent hosts entering emergency mode after upgrade from oVirt
> 4.4.1
>
> Due to Bug 1837864
>  - Host enter
> emergency mode after upgrading to latest build
>
> If you have your root file system on a multipath device on your hosts
> you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
> your host entering emergency mode.
>
> In order to prevent this be sure to upgrade oVirt Engine first, then
> on your hosts:
>
>1.
>
>Remove the current lvm filter while still on 4.4.1, or in
>emergency mode (if rebooted).
>2.
>
>Reboot.
>3.
>
>Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
>4.
>
>Run vdsm-tool config-lvm-filter to confirm there is a new filter
>in place.
>5.
>
>Only if not using oVirt Node:
>- run "dracut --force --add multipath” to rebuild initramfs with
>the correct filter configuration
>6.
>
>Reboot.
>
>
>
 What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
 to follow the same steps as if I were in 4.4.1 or what?
 I would like to avoid going through 4.4.1 if possible.

>>>
>>> I don't think we had anyone testing 4.4.0 to 4.4.2, but the above procedure
>>> should work for that case too.
>>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>>
>>> # grep '^filter = ' /etc/lvm/lvm.conf
>>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>>
>>>
>>>
>>>

 Thanks,
 Gianluca

>>>
>>>
>> OK, so I tried on my single-host HCI installed with ovirt-node-ng 4.4.0
>> and the gluster wizard and never updated until now.
>> Updated self hosted engine to 4.4.2 without problems.
>>
>> My host doesn't have any filter or global_filter set up in lvm.conf  in
>> 4.4.0.
>>
>> So I update it:
>>
>> [root@ovirt01 vdsm]# yum update
>>
>
> Please use the update command from the engine admin portal.
> The ansible code running from there also performs additional steps other
> than just yum update.
> +Dana Elfassy  can you elaborate on other steps
> performed during the upgrade?
>
>
>
>> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51
>> PM CEST.
>> Dependencies resolved.
>>
>> 
>>  Package ArchitectureVersion
>>   Repository  Size
>>
>> 
>> Installing:
>>  ovirt-node-ng-image-update  noarch  4.4.2-1.el8
>>   ovirt-4.4  782 M
>>  replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>>
>> Transaction Summary
>>
>> 
>> Install  1 Package
>>
>> Total download size: 782 M
>> Is this ok [y/N]: y
>> Downloading Packages:
>> ovirt-node-ng-image-update-4.4  27% [= ] 6.0 MB/s |
>> 145 MB 01:45 ETA
>>
>>
>> 
>> Total   5.3
>> MB/s | 782 MB 02:28
>> Running transaction check
>> Transaction check succeeded.
>> Running transaction test
>> Transaction test succeeded.
>> Running transaction
>>   Preparing:
>>1/1
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>>1/2
>>   Installing   : ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>>1/2
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>>1/2
>>   Obsoleting   :
>> ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch
>>  2/2
>>   

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 8:31 AM Sandro Bonazzola  wrote:

>
>> OK, so I tried on my single-host HCI installed with ovirt-node-ng 4.4.0
>> and the gluster wizard and never updated until now.
>> Updated self hosted engine to 4.4.2 without problems.
>>
>> My host doesn't have any filter or global_filter set up in lvm.conf  in
>> 4.4.0.
>>
>> So I update it:
>>
>> [root@ovirt01 vdsm]# yum update
>>
>
> Please use the update command from the engine admin portal.
> The ansible code running from there also performs additional steps other
> than just yum update.
> +Dana Elfassy  can you elaborate on other steps
> performed during the upgrade?
>
>
Yes, in general.
But for single-host environments it is not possible, at least I think,
because you are upgrading the host where the engine is running...

Gianluca


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Sandro Bonazzola
On Sat, Oct 3, 2020 at 2:16 PM Gianluca Cecchi <
gianluca.cec...@gmail.com> wrote:

> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Fri, Sep 25, 2020 at 3:32 PM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
>>> wrote:
>>>
 oVirt Node 4.4.2 is now generally available

 The oVirt project is pleased to announce the general availability of
 oVirt Node 4.4.2 , as of September 25th, 2020.

 This release completes the oVirt 4.4.2 release published on September
 17th

>>>
>>> Thanks for the news!
>>>
>>> How to prevent hosts entering emergency mode after upgrade from oVirt
 4.4.1

 Due to Bug 1837864
  - Host enter
 emergency mode after upgrading to latest build

 If you have your root file system on a multipath device on your hosts
 you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
 your host entering emergency mode.

 In order to prevent this be sure to upgrade oVirt Engine first, then on
 your hosts:

1.

Remove the current lvm filter while still on 4.4.1, or in emergency
mode (if rebooted).
2.

Reboot.
3.

Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
4.

Run vdsm-tool config-lvm-filter to confirm there is a new filter in
place.
5.

Only if not using oVirt Node:
- run "dracut --force --add multipath” to rebuild initramfs with
the correct filter configuration
6.

Reboot.



>>> What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
>>> to follow the same steps as if I were in 4.4.1 or what?
>>> I would like to avoid going through 4.4.1 if possible.
>>>
>>
>> I don't think we had anyone testing 4.4.0 to 4.4.2, but the above procedure
>> should work for that case too.
>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>
>> # grep '^filter = ' /etc/lvm/lvm.conf
>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>
>>
>>
>>
>>>
>>> Thanks,
>>> Gianluca
>>>
>>
>>
> OK, so I tried on my single-host HCI installed with ovirt-node-ng 4.4.0
> and the gluster wizard and never updated until now.
> Updated self hosted engine to 4.4.2 without problems.
>
> My host doesn't have any filter or global_filter set up in lvm.conf  in
> 4.4.0.
>
> So I update it:
>
> [root@ovirt01 vdsm]# yum update
>

Please use the update command from the engine admin portal.
The ansible code running from there also performs additional steps other
than just yum update.
+Dana Elfassy  can you elaborate on other steps
performed during the upgrade?



> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM
> CEST.
> Dependencies resolved.
>
> 
>  Package ArchitectureVersion
> Repository  Size
>
> 
> Installing:
>  ovirt-node-ng-image-update  noarch  4.4.2-1.el8
> ovirt-4.4  782 M
>  replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>
> Transaction Summary
>
> 
> Install  1 Package
>
> Total download size: 782 M
> Is this ok [y/N]: y
> Downloading Packages:
> ovirt-node-ng-image-update-4.4  27% [= ] 6.0 MB/s |
> 145 MB 01:45 ETA
>
>
> 
> Total   5.3
> MB/s | 782 MB 02:28
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:
>  1/1
>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Installing   : ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Obsoleting   :
> ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch
>  2/2
>   Verifying: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Verifying:
> ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch
>  2/2
> Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch.rpm
>
> Installed:
>   ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>
>
> Complete!
> [root@ovirt01 vdsm]# sync
> [root@ovirt01 vdsm]#
>
> I reboot and 4.4.2 is proposed by default, with 4.4.0 available too.
> But 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer  wrote:

> On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
> >
> >
> >
> > On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
> >>
> >> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
> >>>
> >>>
> >>>
> >>> Since there wasn't a filter set on the node, the 4.4.2 update added
> the default filter for the root-lv pv
> >>> if there was some filter set before the upgrade, it would not have
> been added by the 4.4.2 update.
> 
> 
> >>
> >> Do you mean that I will get the same problem upgrading from 4.4.2 to an
> upcoming 4.4.3, as also now I don't have any filter set?
> >> This would not be desirable
> >
> > Once you have got back into 4.4.2, it's recommended to set the lvm
> filter to fit the pvs you use on your node
> > for the local root pv you can run
> > # vdsm-tool config-lvm-filter -y
> > For the gluster bricks you'll need to add their uuids to the filter as
> well.
>
> vdsm-tool is expected to add all the devices needed by the mounted
> logical volumes, so adding devices manually should not be needed.
>
> If this does not work please file a bug and include all the info to
> reproduce
> the issue.
>
>
I don't know exactly what happened when I installed ovirt-node-ng in 4.4.0,
but the effect was that no filter at all was set up in lvm.conf, hence the
problem I had upgrading to 4.4.2.
Any way to see related logs for 4.4.0? In which phase of the install of the
node itself or of the gluster based wizard is it supposed to run the
vdsm-tool command?

Right now in 4.4.2 I get this output, so it seems it works in 4.4.2:

"
[root@ovirt01 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_data
  mountpoint:  /gluster_bricks/data
  devices:
/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_engine
  mountpoint:  /gluster_bricks/engine
  devices:
/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_vmstore
  mountpoint:  /gluster_bricks/vmstore
  devices:
/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/onn-home
  mountpoint:  /home
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1
  mountpoint:  /
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-swap
  mountpoint:  [SWAP]
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-tmp
  mountpoint:  /tmp
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var
  mountpoint:  /var
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_crash
  mountpoint:  /var/crash
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_log
  mountpoint:  /var/log
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_log_audit
  mountpoint:  /var/log/audit
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

This is the recommended LVM filter for this host:

  filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|",
"a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|",
"r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

To use the recommended filter we need to add multipath
blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:

  blacklist {
  wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
  wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
  }


Configure host? [yes,NO]

"
Does this mean that answering "yes" I will get both lvm and multipath
related files modified?

Right now my multipath is configured this way:

[root@ovirt01 ~]# grep -v "^#" /etc/multipath.conf | grep -v "^#" |
grep -v "^$"
defaults {
polling_interval5
no_path_retry   4
user_friendly_names no
flush_on_last_del   yes
fast_io_fail_tmo5
dev_loss_tmo30
max_fds 4096
}
blacklist {
protocol "(scsi:adt|scsi:sbp)"
}
overrides {
  no_path_retry4
}
[root@ovirt01 ~]#

with blacklist explicit on both disks but inside different files:

root disk:
[root@ovirt01 ~]# cat /etc/multipath/conf.d/vdsm_blacklist.conf
# This file is managed by vdsm, 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Nir Soffer
On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
>
>
>
> On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi  
> wrote:
>>
>> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>>>
>>>
>>>
>>> Since there wasn't a filter set on the node, the 4.4.2 update added the 
>>> default filter for the root-lv pv
>>> if there was some filter set before the upgrade, it would not have been 
>>> added by the 4.4.2 update.


>>
>> Do you mean that I will get the same problem upgrading from 4.4.2 to an 
>> upcoming 4.4.3, as also now I don't have any filter set?
>> This would not be desirable
>
> Once you have got back into 4.4.2, it's recommended to set the lvm filter to 
> fit the pvs you use on your node
> for the local root pv you can run
> # vdsm-tool config-lvm-filter -y
> For the gluster bricks you'll need to add their uuids to the filter as well.

vdsm-tool is expected to add all the devices needed by the mounted
logical volumes, so adding devices manually should not be needed.

If this does not work please file a bug and include all the info to reproduce
the issue.

> The next upgrade should not set a filter on its own if one is already set.
>
>>
>>


 Right now only two problems:

 1) a long running problem that from engine web admin all the volumes are 
 seen as up and also the storage domains up, while only the hosted engine 
 one is up, while "data" and vmstore" are down, as I can verify from the 
 host, only one /rhev/data-center/ mount:

>> [snip]


 I already reported this, but I don't know if there is yet a bugzilla open 
 for it.
>>>
>>> Did you get any response for the original mail? haven't seen it on the 
>>> users-list.
>>
>>
>> I think it was this thread related to 4.4.0 released and question about 
>> auto-start of VMs.
>> A script from Derek that tested if domains were active and got false 
>> positive, and my comments about the same registered behaviour:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UOEL2SNHUUC7M4WAJ5NO/
>>
>> But I think there was no answer on that particular item/problem.
>> Indeed I think you can easily reproduce, I don't know if only with Gluster 
>> or also with other storage domains.
>> I don't know if it can have a part the fact that on the last host during a 
>> whole shutdown (and the only host in case of single host) you have to run  
>> the script
>> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
>> otherwise you risk not to get a complete shutdown sometimes.
>> And perhaps this stop can have an influence on the following startup.
>> In any case the web admin gui (and the API access) should not show the 
>> domains active when they are not. I think there is a bug in the code that 
>> checks this.
>
> If it got no response so far, I think it could be helpful to file a bug with 
> the details of the setup and the steps involved here so it will get tracked.
>
>>
>>>

 2) I see that I cannot connect to cockpit console of node.

>> [snip]

 NOTE: the host is not resolved by DNS but I put an entry in my client's hosts file.
>>>
>>> Might be required to set DNS for authenticity, maybe other members on the 
>>> list could tell better.
>>
>>
>> It would be the first time I see it. The access to web admin GUI works ok 
>> even without DNS resolution.
>> I'm not sure if I had the same problem with the cockpit host console on 
>> 4.4.0.
>
> Perhaps +Yedidyah Bar David  could help regarding cockpit web access.
>
>>
>> Gianluca
>>
>>


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Amit Bawer
On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi 
wrote:

> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>
>>
>>
>> Since there wasn't a filter set on the node, the 4.4.2 update added the
>> default filter for the root-lv pv
>> if there was some filter set before the upgrade, it would not have been
>> added by the 4.4.2 update.
>>
>>>
>>>
> Do you mean that I will get the same problem upgrading from 4.4.2 to an
> upcoming 4.4.3, as also now I don't have any filter set?
> This would not be desirable
>
Once you have got back into 4.4.2, it's recommended to set the lvm filter
to fit the pvs you use on your node
for the local root pv you can run
# vdsm-tool config-lvm-filter -y
For the gluster bricks you'll need to add their uuids to the filter as well.
The next upgrade should not set a filter on its own if one is already set.
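
For example, the resulting line in /etc/lvm/lvm.conf would look roughly like the
one below (the two lvm-pv-uuid values are placeholders; use the uuids reported
for your root and gluster PVs, e.g. by "pvs -o pv_name,pv_uuid"):

filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-ROOT-PV-UUID$|", "a|^/dev/disk/by-id/lvm-pv-uuid-GLUSTER-PV-UUID$|", "r|.*|" ]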


>
>
>>
>>> Right now only two problems:
>>>
>>> 1) a long running problem that from engine web admin all the volumes are
>>> seen as up and also the storage domains up, while only the hosted engine
>>> one is up, while "data" and vmstore" are down, as I can verify from the
>>> host, only one /rhev/data-center/ mount:
>>>
>>> [snip]
>
>>
>>> I already reported this, but I don't know if there is yet a bugzilla
>>> open for it.
>>>
>> Did you get any response for the original mail? haven't seen it on the
>> users-list.
>>
>
> I think it was this thread related to 4.4.0 released and question about
> auto-start of VMs.
> A script from Derek that tested if domains were active and got false
> positive, and my comments about the same registered behaviour:
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UOEL2SNHUUC7M4WAJ5NO/
>
> But I think there was no answer on that particular item/problem.
> Indeed I think you can easily reproduce, I don't know if only with Gluster
> or also with other storage domains.
> I don't know if it can have a part the fact that on the last host during a
> whole shutdown (and the only host in case of single host) you have to run
> the script
> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
> otherwise you risk not to get a complete shutdown sometimes.
> And perhaps this stop can have an influence on the following startup.
> In any case the web admin gui (and the API access) should not show the
> domains active when they are not. I think there is a bug in the code that
> checks this.
>
If it got no response so far, I think it could be helpful to file a bug
with the details of the setup and the steps involved here so it will get
tracked.


>
>>
>>> 2) I see that I cannot connect to cockpit console of node.
>>>
>>> [snip]
>
>> NOTE: the host is not resolved by DNS but I put an entry in my client's
>>> hosts file.
>>>
>> Might be required to set DNS for authenticity, maybe other members on the
>> list could tell better.
>>
>
> It would be the first time I see it. The access to web admin GUI works ok
> even without DNS resolution.
> I'm not sure if I had the same problem with the cockpit host console on
> 4.4.0.
>
Perhaps +Yedidyah Bar David   could help regarding cockpit
web access.


> Gianluca
>
>
>


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Gianluca Cecchi
On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:

>
>
> Since there wasn't a filter set on the node, the 4.4.2 update added the
> default filter for the root-lv pv
> if there was some filter set before the upgrade, it would not have been
> added by the 4.4.2 update.
>
>>
>>
Do you mean that I will get the same problem upgrading from 4.4.2 to an
upcoming 4.4.3, as also now I don't have any filter set?
This would not be desirable



>
>> Right now only two problems:
>>
>> 1) a long running problem that from engine web admin all the volumes are
>> seen as up and also the storage domains up, while only the hosted engine
>> one is up, while "data" and vmstore" are down, as I can verify from the
>> host, only one /rhev/data-center/ mount:
>>
>> [snip]

>
>> I already reported this, but I don't know if there is yet a bugzilla open
>> for it.
>>
> Did you get any response for the original mail? haven't seen it on the
> users-list.
>

I think it was this thread related to 4.4.0 released and question about
auto-start of VMs.
A script from Derek that tested if domains were active and got false
positive, and my comments about the same registered behaviour:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UOEL2SNHUUC7M4WAJ5NO/

But I think there was no answer on that particular item/problem.
Indeed I think you can easily reproduce, I don't know if only with Gluster
or also with other storage domains.
I don't know whether it plays a part that on the last host during a
full shutdown (and the only host in the single-host case) you have to run
the script
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
otherwise you sometimes risk not getting a complete shutdown.
And perhaps this stop can have an influence on the following startup.
In any case the web admin gui (and the API access) should not show the
domains active when they are not. I think there is a bug in the code that
checks this.


>
>> 2) I see that I cannot connect to cockpit console of node.
>>
>> [snip]

> NOTE: the host is not resolved by DNS but I put an entry in my client's hosts file.
>>
> Might be required to set DNS for authenticity, maybe other members on the
> list could tell better.
>

It would be the first time I see it. The access to web admin GUI works ok
even without DNS resolution.
I'm not sure if I had the same problem with the cockpit host console on
4.4.0.

Gianluca


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Amit Bawer
On Sun, Oct 4, 2020 at 2:07 AM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 9:42 PM Amit Bawer  wrote:
>
>>
>>
>> On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:
>>
>>>
>>>
>>> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>>>
>>
>> Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
>> maintenance mode
>> if the fs is mounted as read only, try
>>
>> mount -o remount,rw /
>>
>> sync and try to reboot 4.4.2.
>>
>>
> Indeed if I run, when in the emergency shell in 4.4.2, the command:
>
> lvs --config 'devices { filter = [ "a|.*|" ] }'
>
> I see also all the gluster volumes, so I think the update injected the
> nasty filter.
> Possibly during update the command
> # vdsm-tool config-lvm-filter -y
> was executed and erroneously created the filter?
>
Since there wasn't a filter set on the node, the 4.4.2 update added the
default filter for the root-lv pv
if there was some filter set before the upgrade, it would not have been
added by the 4.4.2 update.
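
A quick way to check whether a filter is already in place, both on disk and in
the initramfs, is for example (the same commands used elsewhere in this thread):

# grep '^filter = ' /etc/lvm/lvm.conf
# lsinitrd -f /etc/lvm/lvm.conf | grep filter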


> Anyway remounting read write the root filesystem and removing the filter
> line from lvm.conf and rebooting worked and 4.4.2 booted ok and I was able
> to exit global maintenance and have the engine up.
>
> Thanks Amit for the help and all the insights.
>
> Right now only two problems:
>
> 1) a long running problem that from engine web admin all the volumes are
> seen as up and also the storage domains up, while only the hosted engine
> one is up, while "data" and vmstore" are down, as I can verify from the
> host, only one /rhev/data-center/ mount:
>
> [root@ovirt01 ~]# df -h
> Filesystem  Size  Used Avail
> Use% Mounted on
> devtmpfs 16G 0   16G
> 0% /dev
> tmpfs16G   16K   16G
> 1% /dev/shm
> tmpfs16G   18M   16G
> 1% /run
> tmpfs16G 0   16G
> 0% /sys/fs/cgroup
> /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1  133G  3.9G  129G
> 3% /
> /dev/mapper/onn-tmp1014M   40M  975M
> 4% /tmp
> /dev/mapper/gluster_vg_sda-gluster_lv_engine100G  9.0G   91G
> 9% /gluster_bricks/engine
> /dev/mapper/gluster_vg_sda-gluster_lv_data  500G  126G  375G
>  26% /gluster_bricks/data
> /dev/mapper/gluster_vg_sda-gluster_lv_vmstore90G  6.9G   84G
> 8% /gluster_bricks/vmstore
> /dev/mapper/onn-home   1014M   40M  975M
> 4% /home
> /dev/sdb2   976M  307M  603M
>  34% /boot
> /dev/sdb1   599M  6.8M  593M
> 2% /boot/efi
> /dev/mapper/onn-var  15G  263M   15G
> 2% /var
> /dev/mapper/onn-var_log 8.0G  541M  7.5G
> 7% /var/log
> /dev/mapper/onn-var_crash10G  105M  9.9G
> 2% /var/crash
> /dev/mapper/onn-var_log_audit   2.0G   79M  2.0G
> 4% /var/log/audit
> ovirt01st.lutwyn.storage:/engine100G   10G   90G
>  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
> tmpfs   3.2G 0  3.2G
> 0% /run/user/1000
> [root@ovirt01 ~]#
>
> I can also wait 10 minutes and no change. The way I use to exit from this
> stalled situation is power on a VM, so that obviously it fails
> VM f32 is down with error. Exit message: Unable to get volume size for
> domain d39ed9a3-3b10-46bf-b334-e8970f5deca1 volume
> 242d16c6-1fd9-4918-b9dd-0d477a86424c.
> 10/4/20 12:50:41 AM
>
> and suddenly all the data storage domains are deactivated (from engine
> point of view, because actually they were not active...):
> Storage Domain vmstore (Data Center Default) was deactivated by system
> because it's not visible by any of the hosts.
> 10/4/20 12:50:31 AM
>
> and I can go in Data Centers --> Default --> Storage and activate
> "vmstore" and "data" storage domains and suddenly I get them activated and
> filesystems mounted.
>
> [root@ovirt01 ~]# df -h | grep rhev
> ovirt01st.lutwyn.storage:/engine100G   10G   90G
>  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
> ovirt01st.lutwyn.storage:/data  500G  131G  370G
>  27% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_data
> ovirt01st.lutwyn.storage:/vmstore90G  7.8G   83G
> 9% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_vmstore
> [root@ovirt01 ~]#
>
> and VM starts ok now.
>
> I already reported this, but I don't know if there is yet a bugzilla open
> for it.
>
Did you get any response for the original mail? haven't seen it on the
users-list.


> 2) I see that I cannot connect to cockpit console 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
On Sat, Oct 3, 2020 at 9:42 PM Amit Bawer  wrote:

>
>
> On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:
>
>>
>>
>> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>>
>
> Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
> maintenance mode
> if the fs is mounted as read only, try
>
> mount -o remount,rw /
>
> sync and try to reboot 4.4.2.
>
>
Indeed if I run, when in the emergency shell in 4.4.2, the command:

lvs --config 'devices { filter = [ "a|.*|" ] }'

I see also all the gluster volumes, so I think the update injected the
nasty filter.
Possibly during update the command
# vdsm-tool config-lvm-filter -y
was executed and erroneously created the filter?

Anyway remounting read write the root filesystem and removing the filter
line from lvm.conf and rebooting worked and 4.4.2 booted ok and I was able
to exit global maintenance and have the engine up.

Thanks Amit for the help and all the insights.

Right now only two problems:

1) a long-running problem where from the engine web admin all the volumes are
seen as up and also the storage domains as up, while only the hosted engine
one is actually up and "data" and "vmstore" are down, as I can verify from the
host, which has only one /rhev/data-center/ mount:

[root@ovirt01 ~]# df -h
Filesystem  Size  Used Avail
Use% Mounted on
devtmpfs 16G 0   16G
0% /dev
tmpfs16G   16K   16G
1% /dev/shm
tmpfs16G   18M   16G
1% /run
tmpfs16G 0   16G
0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1  133G  3.9G  129G
3% /
/dev/mapper/onn-tmp1014M   40M  975M
4% /tmp
/dev/mapper/gluster_vg_sda-gluster_lv_engine100G  9.0G   91G
9% /gluster_bricks/engine
/dev/mapper/gluster_vg_sda-gluster_lv_data  500G  126G  375G
 26% /gluster_bricks/data
/dev/mapper/gluster_vg_sda-gluster_lv_vmstore90G  6.9G   84G
8% /gluster_bricks/vmstore
/dev/mapper/onn-home   1014M   40M  975M
4% /home
/dev/sdb2   976M  307M  603M
 34% /boot
/dev/sdb1   599M  6.8M  593M
2% /boot/efi
/dev/mapper/onn-var  15G  263M   15G
2% /var
/dev/mapper/onn-var_log 8.0G  541M  7.5G
7% /var/log
/dev/mapper/onn-var_crash10G  105M  9.9G
2% /var/crash
/dev/mapper/onn-var_log_audit   2.0G   79M  2.0G
4% /var/log/audit
ovirt01st.lutwyn.storage:/engine100G   10G   90G
 10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
tmpfs   3.2G 0  3.2G
0% /run/user/1000
[root@ovirt01 ~]#

I can also wait 10 minutes with no change. The way I get out of this
stalled situation is to power on a VM, which obviously fails:
VM f32 is down with error. Exit message: Unable to get volume size for
domain d39ed9a3-3b10-46bf-b334-e8970f5deca1 volume
242d16c6-1fd9-4918-b9dd-0d477a86424c.
10/4/20 12:50:41 AM

and suddenly all the data storage domains are deactivated (from engine
point of view, because actually they were not active...):
Storage Domain vmstore (Data Center Default) was deactivated by system
because it's not visible by any of the hosts.
10/4/20 12:50:31 AM

and I can go in Data Centers --> Default --> Storage and activate "vmstore"
and "data" storage domains and suddenly I get them activated and
filesystems mounted.

[root@ovirt01 ~]# df -h | grep rhev
ovirt01st.lutwyn.storage:/engine100G   10G   90G
 10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
ovirt01st.lutwyn.storage:/data  500G  131G  370G
 27% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_data
ovirt01st.lutwyn.storage:/vmstore90G  7.8G   83G
9% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_vmstore
[root@ovirt01 ~]#

and VM starts ok now.

I already reported this, but I don't know if there is yet a bugzilla open
for it.

2) I see that I cannot connect to cockpit console of node.

In firefox (version 80) in my Fedora 31 I get:
"
Secure Connection Failed

An error occurred during a connection to ovirt01.lutwyn.local:9090.
PR_CONNECT_RESET_ERROR

The page you are trying to view cannot be shown because the
authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem.

Learn more…
"
In Chrome (build 85.0.4183.121)

"
Your connection is not private
Attackers might be trying to steal your information from
ovirt01.lutwyn.local (for example, 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:

>
>
> On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi 
> wrote:
>
>> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>>
>>> From the info it seems that startup panics because gluster bricks cannot
>>> be mounted.
>>>
>>>
>> Yes, it is so
>> This is a testbed NUC I use for testing.
>> It has 2 disks, the one named sdb is where ovirt node has been installed.
>> The one named sda is where I configured gluster through the wizard,
>> configuring the 3 volumes for engine, vm, data
>>
>> The filter that you do have in the 4.4.2 screenshot should correspond to
>>> your root pv,
>>> you can confirm that by doing (replace the pv-uuid with the one from
>>> your filter):
>>>
>>> #udevadm info
>>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>> P:
>>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>>> N: sda2
>>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>>
>>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>>
>>
>> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
>> special file created of type /dev/disk/by-id/
>>
> What does "udevadm info" show for /dev/sdb3 on 4.4.2?
>
>
>> See here for udevadm command on 4.4.0 that shows sdb3 that is the
>> partition corresponding to PV of root disk
>>
>> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>>
>>
>>
>>> Can you give the output of lsblk on your node?
>>>
>>
>> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>>
>> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>>
>> And here is lsblk as seen from 4.4.2 with an empty sda:
>>
>> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>>
>>
>>> Can you check that the same filter is in initramfs?
>>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>>
>>
>> Here the command from 4.4.0 that shows no filter
>>
>> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>>
>> And here from 4.4.2 emergency mode, where I have to use the path
>> /boot/ovirt-node-ng-4.4.2-0/initramfs-
>> because no initrd file in /boot (in screenshot you also see output of "ll
>> /boot)
>>
>> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>>
>>
>>
>>> We have the following tool on the hosts
>>> # vdsm-tool config-lvm-filter -y
>>> it only sets the filter for local lvm devices, this is run as part of
>>> deployment and upgrade when done from
>>> the engine.
>>>
>>> If you have other volumes which have to be mounted as part of your
>>> startup
>>> then you should add their uuids to the filter as well.
>>>
>>
>> I didn't do anything special in 4.4.0: I installed the node on the intended
>> disk, which was seen as sdb, and then through the single-node HCI wizard I
>> configured the gluster volumes on sda
>>
>> Any suggestion on what to do on 4.4.2 initrd or running correct dracut
>> command from 4.4.0 to correct initramfs of 4.4.2?
>>
The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see
> what needs to be fixed in this case.
>
>
>> BTW: could in the mean time if necessary also boot from 4.4.0 and let it
>> go with engine in 4.4.2?
>>
> Might work, probably not too tested.
>
> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>

Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
maintenance mode
if the fs is mounted as read only, try

mount -o remount,rw /

sync and try to reboot 4.4.2.
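
Put together, a rough sketch of the recovery from the 4.4.2 emergency shell
(back up lvm.conf before editing it; vi is just an example editor):

# mount -o remount,rw /
# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
# vi /etc/lvm/lvm.conf      (remove or comment out the "filter = ..." line)
# sync
# reboot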


>
>>
>>
>> Thanks,
>> Gianluca
>>
>


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>
>> From the info it seems that startup panics because gluster bricks cannot
>> be mounted.
>>
>>
> Yes, it is so
> This is a testbed NUC I use for testing.
> It has 2 disks, the one named sdb is where ovirt node has been installed.
> The one named sda is where I configured gluster through the wizard,
> configuring the 3 volumes for engine, vm, data
>
> The filter that you do have in the 4.4.2 screenshot should correspond to
>> your root pv,
>> you can confirm that by doing (replace the pv-uuid with the one from your
>> filter):
>>
>> #udevadm info
>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>> P:
>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>> N: sda2
>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>
>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>
>
> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
> special file created of type /dev/disk/by-id/
>
What does "udevadm info" show for /dev/sdb3 on 4.4.2?


> See here for udevadm command on 4.4.0 that shows sdb3 that is the
> partition corresponding to PV of root disk
>
> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>
>
>
>> Can you give the output of lsblk on your node?
>>
>
> Here is lsblk as seen from 4.4.0 with gluster volumes on sda:
>
> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>
> And here is lsblk as seen from 4.4.2 with an empty sda:
>
> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>
>
>> Can you check that the same filter is in initramfs?
>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>
>
> Here is the command output from 4.4.0, which shows no filter:
>
> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>
> And here from 4.4.2 emergency mode, where I have to use the path
> /boot/ovirt-node-ng-4.4.2-0/initramfs-
> because there is no initrd file in /boot (in the screenshot you also see
> the output of "ll /boot"):
>
> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>
>
>
>> We have the following tool on the hosts
>> # vdsm-tool config-lvm-filter -y
>> it only sets the filter for local lvm devices, this is run as part of
>> deployment and upgrade when done from
>> the engine.
>>
>> If you have other volumes which have to be mounted as part of your startup
>> then you should add their uuids to the filter as well.
>>
>
> I didn't do anything special in 4.4.0: I installed node on the intended disk,
> which was seen as sdb, and then through the single node hci wizard I
> configured the gluster volumes on sda.
>
> Any suggestion on what to do with the 4.4.2 initrd, or on running the
> correct dracut command from 4.4.0 to fix the 4.4.2 initramfs?
>
The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see
what needs to be fixed in this case.


> BTW: could I, in the meantime and if necessary, also boot from 4.4.0 and
> let it run with the engine on 4.4.2?
>
Might work, but probably not well tested.

For the gluster bricks being filtered out in 4.4.2, this seems like [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805


>
>
> Thanks,
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDJHASPYE5PC2HFJC2LJDPGKV2JA7MAV/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
On Sat, Oct 3, 2020 at 6:33 PM Gianluca Cecchi 
wrote:

> Sorry, I see that there was an error in the lsinitrd command in 4.4.2,
> inverting the "-f" position.
> Here is the screenshot, which anyway shows no filter active:
>
> https://drive.google.com/file/d/19VmgvsHU2DhJCRzCbO9K_Xyr70x4BqXX/view?usp=sharing
>
> Gianluca
>
>
> On Sat, Oct 3, 2020 at 6:26 PM Gianluca Cecchi 
> wrote:
>
>> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>>
>>> From the info it seems that startup panics because gluster bricks cannot
>>> be mounted.
>>>
>>>
>> Yes, it is so
>> This is a testbed NUC I use for testing.
>> It has 2 disks, the one named sdb is where ovirt node has been installed.
>> The one named sda is where I configured gluster through the wizard,
>> configuring the 3 volumes for engine, vm, data
>>
>> The filter that you do have in the 4.4.2 screenshot should correspond to
>>> your root pv,
>>> you can confirm that by doing (replace the pv-uuid with the one from
>>> your filter):
>>>
>>> #udevadm info
>>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>> P:
>>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>>> N: sda2
>>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>>
>>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>>
>>
>> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
>> special file created of type /dev/disk/by-id/
>> See here for udevadm command on 4.4.0 that shows sdb3 that is the
>> partition corresponding to PV of root disk
>>
>> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>>
>>
>>
>>> Can you give the output of lsblk on your node?
>>>
>>
>> Here is lsblk as seen from 4.4.0 with gluster volumes on sda:
>>
>> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>>
>> And here is lsblk as seen from 4.4.2 with an empty sda:
>>
>> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>>
>>
>>> Can you check that the same filter is in initramfs?
>>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>>
>>
>> Here is the command output from 4.4.0, which shows no filter:
>>
>> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>>
>> And here from 4.4.2 emergency mode, where I have to use the path
>> /boot/ovirt-node-ng-4.4.2-0/initramfs-
>> because there is no initrd file in /boot (in the screenshot you also see
>> the output of "ll /boot"):
>>
>> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>>
>>
>>
>>> We have the following tool on the hosts
>>> # vdsm-tool config-lvm-filter -y
>>> it only sets the filter for local lvm devices, this is run as part of
>>> deployment and upgrade when done from
>>> the engine.
>>>
>>> If you have other volumes which have to be mounted as part of your
>>> startup
>>> then you should add their uuids to the filter as well.
>>>
>>
>> I didn't do anything special in 4.4.0: I installed node on the intended
>> disk, which was seen as sdb, and then through the single node hci wizard I
>> configured the gluster volumes on sda.
>>
>> Any suggestion on what to do with the 4.4.2 initrd, or on running the
>> correct dracut command from 4.4.0 to fix the 4.4.2 initramfs?
>>
>> BTW: could I, in the meantime and if necessary, also boot from 4.4.0 and
>> let it run with the engine on 4.4.2?
>>
>> Thanks,
>> Gianluca
>>
>

Too many photos... ;-)

I used the 4.4.0 initramfs.
Here is the output using the 4.4.2 initramfs:

https://drive.google.com/file/d/1yLzJzokK5C1LHNuFbNoXWHXfzFncXe0O/view?usp=sharing

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UEWNQHRAMLKAL3XZOJGOOQ3J77DAMHFA/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
Sorry, I see that there was an error in the lsinitrd command in 4.4.2,
inverting the "-f" position.
Here is the screenshot, which anyway shows no filter active:
https://drive.google.com/file/d/19VmgvsHU2DhJCRzCbO9K_Xyr70x4BqXX/view?usp=sharing

Gianluca


On Sat, Oct 3, 2020 at 6:26 PM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>
>> From the info it seems that startup panics because gluster bricks cannot
>> be mounted.
>>
>>
> Yes, it is so
> This is a testbed NUC I use for testing.
> It has 2 disks, the one named sdb is where ovirt node has been installed.
> The one named sda is where I configured gluster through the wizard,
> configuring the 3 volumes for engine, vm, data
>
> The filter that you do have in the 4.4.2 screenshot should correspond to
>> your root pv,
>> you can confirm that by doing (replace the pv-uuid with the one from your
>> filter):
>>
>> #udevadm info
>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>> P:
>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>> N: sda2
>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>
>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>
>
> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
> special file created of type /dev/disk/by-id/
> See here for udevadm command on 4.4.0 that shows sdb3 that is the
> partition corresponding to PV of root disk
>
> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>
>
>
>> Can you give the output of lsblk on your node?
>>
>
> Here is lsblk as seen from 4.4.0 with gluster volumes on sda:
>
> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>
> And here is lsblk as seen from 4.4.2 with an empty sda:
>
> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>
>
>> Can you check that the same filter is in initramfs?
>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>
>
> Here is the command output from 4.4.0, which shows no filter:
>
> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>
> And here from 4.4.2 emergency mode, where I have to use the path
> /boot/ovirt-node-ng-4.4.2-0/initramfs-
> because there is no initrd file in /boot (in the screenshot you also see
> the output of "ll /boot"):
>
> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>
>
>
>> We have the following tool on the hosts
>> # vdsm-tool config-lvm-filter -y
>> it only sets the filter for local lvm devices, this is run as part of
>> deployment and upgrade when done from
>> the engine.
>>
>> If you have other volumes which have to be mounted as part of your startup
>> then you should add their uuids to the filter as well.
>>
>
> I didn't do anything special in 4.4.0: I installed node on the intended disk,
> which was seen as sdb, and then through the single node hci wizard I
> configured the gluster volumes on sda.
>
> Any suggestion on what to do with the 4.4.2 initrd, or on running the
> correct dracut command from 4.4.0 to fix the 4.4.2 initramfs?
>
> BTW: could I, in the meantime and if necessary, also boot from 4.4.0 and
> let it run with the engine on 4.4.2?
>
> Thanks,
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GMP6UHTWIR3BCCNEJT6KU4QRORFSC5DB/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:

> From the info it seems that startup panics because gluster bricks cannot
> be mounted.
>
>
Yes, it is so
This is a testbed NUC I use for testing.
It has 2 disks, the one named sdb is where ovirt node has been installed.
The one named sda is where I configured gluster through the wizard,
configuring the 3 volumes for engine, vm, data

The filter that you do have in the 4.4.2 screenshot should correspond to
> your root pv,
> you can confirm that by doing (replace the pv-uuid with the one from your
> filter):
>
> #udevadm info
>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
> P:
> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
> N: sda2
> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>
> In this case sda2 is the partition of the root-lv shown by lsblk.
>

Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
special file created of type /dev/disk/by-id/
See here for udevadm command on 4.4.0 that shows sdb3 that is the partition
corresponding to PV of root disk
https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing



> Can you give the output of lsblk on your node?
>

Here is lsblk as seen from 4.4.0 with gluster volumes on sda:
https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing

And here is lsblk as seen from 4.4.2 with an empty sda:
https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing


> Can you check that the same filter is in initramfs?
> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>

Here is the command output from 4.4.0, which shows no filter:
https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing

And here from 4.4.2 emergency mode, where I have to use the path
/boot/ovirt-node-ng-4.4.2-0/initramfs-
because there is no initrd file in /boot (in the screenshot you also see the
output of "ll /boot"):
https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing



> We have the following tool on the hosts
> # vdsm-tool config-lvm-filter -y
> it only sets the filter for local lvm devices, this is run as part of
> deployment and upgrade when done from
> the engine.
>
> If you have other volumes which have to be mounted as part of your startup
> then you should add their uuids to the filter as well.
>

I didn't do anything special in 4.4.0: I installed node on the intended disk,
which was seen as sdb, and then through the single node hci wizard I
configured the gluster volumes on sda.

Any suggestion on what to do with the 4.4.2 initrd, or on running the correct
dracut command from 4.4.0 to fix the 4.4.2 initramfs?

BTW: could I, in the meantime and if necessary, also boot from 4.4.0 and let
it run with the engine on 4.4.2?

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VV4NAZ6XFITMYPRDMHRWVWOMFCASTKY6/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
From the info it seems that startup panics because gluster bricks cannot be
mounted.

The filter that you do have in the 4.4.2 screenshot should correspond to
your root pv,
you can confirm that by doing (replace the pv-uuid with the one from your
filter):

#udevadm info
 /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
P:
/devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
N: sda2
S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ

In this case sda2 is the partition of the root-lv shown by lsblk.

Can you give the output of lsblk on your node?

Can you check that the same filter is in initramfs?
# lsinitrd -f  /etc/lvm/lvm.conf | grep filter

We have the following tool on the hosts
# vdsm-tool config-lvm-filter -y
it only sets the filter for local lvm devices, this is run as part of
deployment and upgrade when done from
the engine.

If you have other volumes which have to be mounted as part of your startup
then you should add their uuids to the filter as well.
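
As a rough sketch of what that could look like (the UUID values below are just
placeholders, not real ones):

# pvs -o pv_name,pv_uuid      # note the UUIDs of the root PV and the gluster brick PVs
# vi /etc/lvm/lvm.conf        # then extend the accept list, e.g.:
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-ROOTPVUUID$|", "a|^/dev/disk/by-id/lvm-pv-uuid-BRICKPVUUID$|", "r|.*|"]

and if the filter is also present in the initramfs, rebuild it afterwards with
"dracut --force".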


On Sat, Oct 3, 2020 at 3:19 PM Gianluca Cecchi 
wrote:

> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> Il giorno ven 25 set 2020 alle ore 15:32 Gianluca Cecchi <
>> gianluca.cec...@gmail.com> ha scritto:
>>
>>>
>>>
>>> On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
>>> wrote:
>>>
 oVirt Node 4.4.2 is now generally available

 The oVirt project is pleased to announce the general availability of
 oVirt Node 4.4.2 , as of September 25th, 2020.

 This release completes the oVirt 4.4.2 release published on September
 17th

>>>
>>> Thanks for the news!
>>>
>>> How to prevent hosts entering emergency mode after upgrade from oVirt
 4.4.1

 Due to Bug 1837864
  - Host enter
 emergency mode after upgrading to latest build

 If you have your root file system on a multipath device on your hosts
 you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
 your host entering emergency mode.

 In order to prevent this be sure to upgrade oVirt Engine first, then on
 your hosts:

1.

Remove the current lvm filter while still on 4.4.1, or in emergency
mode (if rebooted).
2.

Reboot.
3.

Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
4.

Run vdsm-tool config-lvm-filter to confirm there is a new filter in
place.
5.

Only if not using oVirt Node:
- run "dracut --force --add multipath” to rebuild initramfs with
the correct filter configuration
6.

Reboot.



>>> What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
>>> to follow the same steps as if I were in 4.4.1 or what?
>>> I would like to avoid going through 4.4.1 if possible.
>>>
>>
>> I don't think we had someone testing 4.4.0 to 4.4.2, but the above
>> procedure should work for the same case.
>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>
>> # grep '^filter = ' /etc/lvm/lvm.conf
>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>
>>
>>
>>
>>>
>>> Thanks,
>>> Gianluca
>>>
>>
>>
> OK, so I tried on my single host HCI installed with ovirt-node-ng 4.4.0
> and the gluster wizard, and never updated until now.
> Updated self hosted engine to 4.4.2 without problems.
>
> My host doesn't have any filter or global_filter set up in lvm.conf  in
> 4.4.0.
>
> So I update it:
>
> [root@ovirt01 vdsm]# yum update
> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM
> CEST.
> Dependencies resolved.
>
> 
>  Package ArchitectureVersion
> Repository  Size
>
> 
> Installing:
>  ovirt-node-ng-image-update  noarch  4.4.2-1.el8
> ovirt-4.4  782 M
>  replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>
> Transaction Summary
>
> 
> Install  1 Package
>
> Total download size: 782 M
> Is this ok [y/N]: y
> Downloading Packages:
> ovirt-node-ng-image-update-4.4  27% [= ] 6.0 MB/s |
> 145 MB 01:45 ETA
>
>
> 
> Total   5.3
> MB/s | 782 MB 02:28
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:
>  1/1
>   Running scriptlet: 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-09-25 Thread Sandro Bonazzola
Il giorno ven 25 set 2020 alle ore 15:32 Gianluca Cecchi <
gianluca.cec...@gmail.com> ha scritto:

>
>
> On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
> wrote:
>
>> oVirt Node 4.4.2 is now generally available
>>
>> The oVirt project is pleased to announce the general availability of
>> oVirt Node 4.4.2 , as of September 25th, 2020.
>>
>> This release completes the oVirt 4.4.2 release published on September 17th
>>
>
> Thanks for the news!
>
> How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
>>
>>
>> Due to Bug 1837864  -
>> Host enter emergency mode after upgrading to latest build
>>
>> If you have your root file system on a multipath device on your hosts you
>> should be aware that after upgrading from 4.4.1 to 4.4.2 you may get your
>> host entering emergency mode.
>>
>> In order to prevent this be sure to upgrade oVirt Engine first, then on
>> your hosts:
>>
>>1.
>>
>>Remove the current lvm filter while still on 4.4.1, or in emergency
>>mode (if rebooted).
>>2.
>>
>>Reboot.
>>3.
>>
>>Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
>>4.
>>
>>Run vdsm-tool config-lvm-filter to confirm there is a new filter in
>>place.
>>5.
>>
>>Only if not using oVirt Node:
>>- run "dracut --force --add multipath” to rebuild initramfs with the
>>correct filter configuration
>>6.
>>
>>Reboot.
>>
>>
>>
> What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have to
> follow the same steps as if I were in 4.4.1 or what?
> I would like to avoid going through 4.4.1 if possible.
>

I don't think we had someone testing 4.4.0 to 4.4.2, but the above procedure
should work for the same case.
The problematic filter in /etc/lvm/lvm.conf looks like:

# grep '^filter = ' /etc/lvm/lvm.conf
filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]




>
> Thanks,
> Gianluca
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TNHBFXEE5W3NTR3BPPNZXH2QQAO4MJD6/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-09-25 Thread Gianluca Cecchi
On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
wrote:

> oVirt Node 4.4.2 is now generally available
>
> The oVirt project is pleased to announce the general availability of oVirt
> Node 4.4.2 , as of September 25th, 2020.
>
> This release completes the oVirt 4.4.2 release published on September 17th
>

Thanks for the news!

How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
>
>
> Due to Bug 1837864  -
> Host enter emergency mode after upgrading to latest build
>
> If you have your root file system on a multipath device on your hosts you
> should be aware that after upgrading from 4.4.1 to 4.4.2 you may get your
> host entering emergency mode.
>
> In order to prevent this be sure to upgrade oVirt Engine first, then on
> your hosts:
>
>1.
>
>Remove the current lvm filter while still on 4.4.1, or in emergency
>mode (if rebooted).
>2.
>
>Reboot.
>3.
>
>Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
>4.
>
>Run vdsm-tool config-lvm-filter to confirm there is a new filter in
>place.
>5.
>
>Only if not using oVirt Node:
>- run "dracut --force --add multipath” to rebuild initramfs with the
>correct filter configuration
>6.
>
>Reboot.
>
>
>
What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have to
follow the same steps as if I were in 4.4.1 or what?
I would like to avoid going through 4.4.1 if possible.

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PI2LS3NCULH3FXQKBSB4IGXLKUBXE6UL/