[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-23 Thread Nir Soffer
On Wed, Mar 23, 2022 at 6:04 PM Abe E  wrote:

> After running: yum reinstall ovirt-node-ng-image-update
> It re-installed the oVirt node and I was able to start VDSM again, as well
> as the ovirt-ha-broker and ovirt-ha-agent.
>
> I was still unable to activate the 2nd node in the engine, so I tried to
> re-install with engine deploy, and it was able to complete past the previous
> VDSM issue it had.
>
> Thank you for your help with the LVM issues I was having; noted
> for future reference!
>

Great that you managed to recover, but if reinstalling fixed the issue, it
means that there is some issue with the node upgrade.

Sandro, do you think we need a bug for this?


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-23 Thread Abe E
After running: yum reinstall ovirt-node-ng-image-update
It re-installed the oVirt node and I was able to start VDSM again, as well as the
ovirt-ha-broker and ovirt-ha-agent.

I was still unable to activate the 2nd node in the engine, so I tried to
re-install with engine deploy, and it was able to complete past the previous
VDSM issue it had.

Thank you for your help with the LVM issues I was having; noted for
future reference!
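
For anyone hitting the same state, the recovery described above amounts to
roughly the following sketch (service names assume a standard oVirt Node 4.4
host, as in this thread; run as root):

  # reinstall the layered node image that the failed upgrade left broken
  yum reinstall ovirt-node-ng-image-update

  # bring VDSM and the hosted-engine HA services back up
  systemctl start vdsmd ovirt-ha-broker ovirt-ha-agent

  # confirm they are active before retrying the host reinstall from the engine
  systemctl status vdsmd ovirt-ha-broker ovirt-ha-agent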


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Abe E
Thank you.
I've tried to re-install only the current oVirt package
(ovirt-engine-appliance-4.4-20220308105414.1.el8.x86_64.rpm).

My oVirt host is running 4.4. I am not sure if we can fully rely on the info
oVirt reports, since it sees this server as "not responding", but these are the
specs:

OS Version:
RHEL - 8.6.2109.0 - 1.el8
OS Description:
oVirt Node 4.4.10
Kernel Version:
4.18.0 - 358.el8.x86_64
KVM Version:
6.0.0 - 33.el8s
LIBVIRT Version:
libvirt-7.10.0-1.module_el8.6.0+1046+bd8eec5e
VDSM Version:
vdsm-4.40.100.2-1.el8
SPICE Version:
0.14.3 - 4.el8
GlusterFS Version:
glusterfs-8.6-2.el8s
CEPH Version:
librbd1-16.2.7-1.el8s
Open vSwitch Version:
openvswitch-2.11-1.el8
Nmstate Version:
nmstate-1.2.1-0.2.alpha2.el8
Kernel Features:
MDS: (Mitigation: Clear CPU buffers; SMT vulnerable), L1TF: (Mitigation: PTE 
Inversion; VMX: conditional cache flushes, SMT vulnerable), SRBDS: (Not 
affected), MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: 
usercopy/swapgs barriers and __user pointer sanitization), SPECTRE_V2: 
(Mitigation: Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: 
conditional, RSB filling), ITLB_MULTIHIT: (KVM: Mitigation: VMX disabled), 
TSX_ASYNC_ABORT: (Mitigation: Clear CPU buffers; SMT vulnerable), 
SPEC_STORE_BYPASS: (Mitigation: Speculative Store Bypass disabled via prctl and 
seccomp)
VNC Encryption:
Disabled
FIPS mode enabled:
Disabled


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Nir Soffer
On Tue, Mar 22, 2022 at 8:14 PM Abe E  wrote:

> Apologies, here it is
> [root@ovirt-2 ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Found these mounted logical volumes on this host:
>
>   logical volume:  /dev/mapper/gluster_vg_sda4-gluster_lv_data
>   mountpoint:  /gluster_bricks/data
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU
>
>   logical volume:  /dev/mapper/gluster_vg_sda4-gluster_lv_engine
>   mountpoint:  /gluster_bricks/engine
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU
>
>   logical volume:  /dev/mapper/onn-home
>   mountpoint:  /home
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
>
>   logical volume:
> /dev/mapper/onn-ovirt--node--ng--4.4.10.1--0.20220202.0+1
>   mountpoint:  /
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
>
>   logical volume:  /dev/mapper/onn-swap
>   mountpoint:  [SWAP]
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
>
>   logical volume:  /dev/mapper/onn-tmp
>   mountpoint:  /tmp
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
>
>   logical volume:  /dev/mapper/onn-var
>   mountpoint:  /var
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
>
>   logical volume:  /dev/mapper/onn-var_crash
>   mountpoint:  /var/crash
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
>
>   logical volume:  /dev/mapper/onn-var_log
>   mountpoint:  /var/log
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
>
>   logical volume:  /dev/mapper/onn-var_log_audit
>   mountpoint:  /var/log/audit
>   devices:
>  /dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY
>
> This is the recommended LVM filter for this host:
>
>   filter = [
> "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|",
> "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|",
> "r|.*|" ]
>
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> This is the current LVM filter:
>
>   filter = [
> "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|",
> "a|^/dev/sda|", "r|.*|" ]
>
> To use the recommended filter we need to add multipath
> blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
>
>   blacklist {
>   wwid "364cd98f06762ec0029afc17a03e0cf6a"
>   }
>
>
> WARNING: The current LVM filter does not match the recommended filter,
> Vdsm cannot configure the filter automatically.
>
> Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
> 'devices' section to the recommended value.
>
> Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
> recommended 'blacklist' section.
>
> It is recommended to reboot to verify the new configuration.
>
> After configuring LVM to the recommended filter:
>
> I adjusted to the recommended filter, although it was still returning the
> same results when I ran the vdsm-tool config-lvm-filter command. Instead I
> did as you mentioned: I commented out my current filter, ran
> vdsm-tool config-lvm-filter, and it configured successfully, then I rebooted
> the node.
>
> Now on boot it is returning the following, which looks a lot better:
> Analyzing host...
> LVM filter is already configured for Vdsm
>

Good, we solved the storage issue.


> Now my error on re-install is: Host ovirt-2... installation failed. Task
> Configure host for vdsm failed to execute. That was just a re-install, and
> the log returns this output; let me know if
> you'd like more from it, but this is where it seems to error out:
>
> "start_line" : 215,
> "end_line" : 216,
> "runner_ident" : "ddb84e00-aa0a-11ec-98dc-00163e6f31f1",
> "event" : "runner_on_failed",
> "pid" : 83339,
> "created" : "2022-03-22T18:09:08.381022",
> "parent_uuid" : "00163e6f-31f1-a3fb-8e1d-0201",
> "event_data" : {
>   "playbook" : "ovirt-host-deploy.yml",
>   "playbook_uuid" : "2e84fbd4-8368-463e-82e7-3f457ae702d4",
>   "play" : "all",
>   "play_uuid" : "00163e6f-31f1-a3fb-8e1d-000b",
>   "play_pattern" : "all",
>   "task" : "Configure host for vdsm",
>   "task_uuid" : "00163e6f-31f1-a3fb-8e1d-0201",
>   "task_action" : "command",
>   "task_args" : "",
>   "task_path" :
> "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm/tasks/configure.yml:27",
>   "role" : "ovirt-host-deploy-vdsm",
>   "host" : "ovirt-2..com",
>   "remote_addr" : "ovirt-2..com",
>   "res" : {
> "msg" : "non-zero return code",
> "cmd" : [ "vdsm-tool", "configure", "--force" ],
> 

[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Abe E
Apologies, here it is
[root@ovirt-2 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/gluster_vg_sda4-gluster_lv_data
  mountpoint:  /gluster_bricks/data
  devices: 
/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU

  logical volume:  /dev/mapper/gluster_vg_sda4-gluster_lv_engine
  mountpoint:  /gluster_bricks/engine
  devices: 
/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU

  logical volume:  /dev/mapper/onn-home
  mountpoint:  /home
  devices: 
/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.10.1--0.20220202.0+1
  mountpoint:  /
  devices: 
/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-swap
  mountpoint:  [SWAP]
  devices: 
/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-tmp
  mountpoint:  /tmp
  devices: 
/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var
  mountpoint:  /var
  devices: 
/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var_crash
  mountpoint:  /var/crash
  devices: 
/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var_log
  mountpoint:  /var/log
  devices: 
/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

  logical volume:  /dev/mapper/onn-var_log_audit
  mountpoint:  /var/log/audit
  devices: 
/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY

This is the recommended LVM filter for this host:

  filter = [ 
"a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", 
"a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", 
"r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

This is the current LVM filter:

  filter = [ 
"a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", 
"a|^/dev/sda|", "r|.*|" ]

To use the recommended filter we need to add multipath
blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:

  blacklist {
  wwid "364cd98f06762ec0029afc17a03e0cf6a"
  }


WARNING: The current LVM filter does not match the recommended filter,
Vdsm cannot configure the filter automatically.

Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
'devices' section to the recommended value.

Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
recommended 'blacklist' section.

It is recommended to reboot to verify the new configuration.

After configuring LVM to the recommended filter:

I adjusted to the recommended filter, although it was still returning the same
results when I ran the vdsm-tool config-lvm-filter command. Instead I did as
you mentioned: I commented out my current filter, ran vdsm-tool
config-lvm-filter, and it configured successfully, then I rebooted the node.

Now on boot it is returning the following, which looks a lot better:
Analyzing host...
LVM filter is already configured for Vdsm
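
(For reference, the sequence that worked here is roughly the following sketch;
it assumes the only custom 'filter' line lives in the devices section of
/etc/lvm/lvm.conf:)

  # comment out the old 'filter = [...]' line in the devices section
  vi /etc/lvm/lvm.conf

  # let the tool compute and apply the recommended filter and multipath blacklist
  vdsm-tool config-lvm-filter

  # reboot, then re-run the tool; it should now report the filter as configured
  reboot
  vdsm-tool config-lvm-filter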


Now my error on re-install is: Host ovirt-2... installation failed. Task
Configure host for vdsm failed to execute. That was just a re-install, and the
log returns this output; let me know if you'd like
more from it, but this is where it seems to error out:

"start_line" : 215,
"end_line" : 216,
"runner_ident" : "ddb84e00-aa0a-11ec-98dc-00163e6f31f1",
"event" : "runner_on_failed",
"pid" : 83339,
"created" : "2022-03-22T18:09:08.381022",
"parent_uuid" : "00163e6f-31f1-a3fb-8e1d-0201",
"event_data" : {
  "playbook" : "ovirt-host-deploy.yml",
  "playbook_uuid" : "2e84fbd4-8368-463e-82e7-3f457ae702d4",
  "play" : "all",
  "play_uuid" : "00163e6f-31f1-a3fb-8e1d-000b",
  "play_pattern" : "all",
  "task" : "Configure host for vdsm",
  "task_uuid" : "00163e6f-31f1-a3fb-8e1d-0201",
  "task_action" : "command",
  "task_args" : "",
  "task_path" : 
"/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm/tasks/configure.yml:27",
  "role" : "ovirt-host-deploy-vdsm",
  "host" : "ovirt-2..com",
  "remote_addr" : "ovirt-2..com",
  "res" : {
"msg" : "non-zero return code",
"cmd" : [ "vdsm-tool", "configure", "--force" ],
"stdout" : "\nChecking configuration status...\n\nlibvirt is already 
configured for vdsm\nSUCCESS: ssl configured to true. No conflicts\nManaged 
volume database is already configured\nlvm is configured for vdsm\nsanlock is 

[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Nir Soffer
On Tue, Mar 22, 2022 at 7:17 PM Nir Soffer  wrote:
>
> On Tue, Mar 22, 2022 at 6:57 PM Abe E  wrote:
> >
> > Yes it throws the following:
> >
> > This is the recommended LVM filter for this host:
> >
> >   filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|",
"a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|",
"r|.*|" ]
>
> This is not the complete output - did you strip the lines explaining why
> we need this filter?
>
> > This filter allows LVM to access the local devices used by the
> > hypervisor, but not shared storage owned by Vdsm. If you add a new
> > device to the volume group, you will need to edit the filter manually.
> >
> > This is the current LVM filter:
> >
> >   filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|",
"a|^/dev/sda|", "r|.*|" ]
>
> So the issue is that you likely have a stale lvm filter for a device
> which is not used by the host.
>
> >
> > To use the recommended filter we need to add multipath
> > blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
> >
> >   blacklist {
> >   wwid "364cd98f06762ec0029afc17a03e0cf6a"
> >   }
> >
> >
> > WARNING: The current LVM filter does not match the recommended filter,
> > Vdsm cannot configure the filter automatically.
> >
> > Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
> > 'devices' section to the recommended value.
> >
> > Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
> > recommended 'blacklist' section.
> >
> > It is recommended to reboot to verify the new configuration.
> >
> >
> >
> >
> > I updated my entry to the following (Blacklist is already configured
from before):
> >   filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|","a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|","a|^/dev/sda|","r|.*|"
]
> >
> >
> > Although it then threw this error:
> >
> > [root@ovirt-2 ~]# vdsm-tool config-lvm-filter
> > Analyzing host...
> > Parse error at byte 106979 (line 2372): unexpected token
> >   Failed to load config file /etc/lvm/lvm.conf
> > Traceback (most recent call last):
> >   File "/usr/bin/vdsm-tool", line 209, in main
> > return tool_command[cmd]["command"](*args)
> >   File
"/usr/lib/python3.6/site-packages/vdsm/tool/config_lvm_filter.py", line 65,
in main
> > mounts = lvmfilter.find_lvm_mounts()
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py",
line 170, in find_lvm_mounts
> > vg_name, tags = vg_info(name)
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py",
line 467, in vg_info
> > lv_path
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py",
line 566, in _run
> > out = subprocess.check_output(args)
> >   File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
> > **kwargs).stdout
> >   File "/usr/lib64/python3.6/subprocess.py", line 438, in run
> > output=stdout, stderr=stderr)
> > subprocess.CalledProcessError: Command '['/usr/sbin/lvm', 'lvs',
'--noheadings', '--readonly', '--config', 'devices {filter=["a|.*|"ed
non-zero exit status 4.
>
>
> I'm not sure if this error comes from the code configuring the lvm filter,
> or from lvm.
>
> The best way to handle this depends on why you have an lvm filter that
> vdsm-tool cannot handle.
>
> If you know why the lvm filter is set to the current value, and you
> know that the system actually needs all the devices in the filter,
> you can keep the current lvm filter.
>
> If you don't know why the current lvm filter is set to this value, you
> can remove the lvm filter from lvm.conf and run "vdsm-tool config-lvm-filter"
> to let the tool configure the default filter.
>
> In general, the lvm filter allows the host to access the devices
> needed by the host, for example the root file system.
>
> If you are not sure what the required devices are, please share the
> *complete* output of running "vdsm-tool config-lvm-filter", with an lvm.conf
> that does not include any filter.

Example of running config-lvm-filter on a RHEL 8.6 host with oVirt 4.5:

# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/rhel-root
  mountpoint:  /
  devices: /dev/vda2

  logical volume:  /dev/mapper/rhel-swap
  mountpoint:  [SWAP]
  devices: /dev/vda2

  logical volume:  /dev/mapper/test-lv1
  mountpoint:  /data
  devices: /dev/mapper/0QEMU_QEMU_HARDDISK_123456789

Configuring LVM system.devices.
Devices for following VGs will be imported:

 rhel, test

Configure host? [yes,NO]

The tool shows that we have 3 mounted logical volumes, and suggests
configuring the lvmdevices file for 2 volume groups.

On oVirt 4.4, the configuration method is the lvm filter, and the tool suggests
the required filter for the mounted logical volumes.
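
(If you want to verify the result on such a 4.5 host afterwards, the imported
entries end up in the LVM devices file and can be listed with the stock LVM
tooling; a sketch, assuming EL 8.6's lvm2:)

  # list the devices recorded in /etc/lvm/devices/system.devices
  lvmdevices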

[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Nir Soffer
On Tue, Mar 22, 2022 at 6:57 PM Abe E  wrote:
>
> Yes it throws the following:
>
> This is the recommended LVM filter for this host:
>
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", 
> "r|.*|" ]

This is not the complete output - did you strip the lines explaining why
we need this filter?

> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> This is the current LVM filter:
>
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", 
> "a|^/dev/sda|", "r|.*|" ]

So the issue is that you likely have a stale lvm filter for a device
which is not used by the host.
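
A quick way to confirm that is to compare the PV UUID links that actually exist
on the host with the UUIDs listed in the current filter; a sketch (plain shell,
nothing vdsm-specific):

  # PV UUID symlinks udev created for devices present on this host
  ls -l /dev/disk/by-id/lvm-pv-uuid-*

  # the filter currently active in lvm.conf
  grep -E '^[[:space:]]*filter' /etc/lvm/lvm.conf

Any UUID that appears in the filter but has no matching symlink belongs to a
device the host no longer uses.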

>
> To use the recommended filter we need to add multipath
> blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
>
>   blacklist {
>   wwid "364cd98f06762ec0029afc17a03e0cf6a"
>   }
>
>
> WARNING: The current LVM filter does not match the recommended filter,
> Vdsm cannot configure the filter automatically.
>
> Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
> 'devices' section to the recommended value.
>
> Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
> recommended 'blacklist' section.
>
> It is recommended to reboot to verify the new configuration.
>
>
>
>
> I updated my entry to the following (Blacklist is already configured from 
> before):
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|","a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|","a|^/dev/sda|","r|.*|"
>  ]
>
>
> although then it threw this error
>
> [root@ovirt-2 ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Parse error at byte 106979 (line 2372): unexpected token
>   Failed to load config file /etc/lvm/lvm.conf
> Traceback (most recent call last):
>   File "/usr/bin/vdsm-tool", line 209, in main
> return tool_command[cmd]["command"](*args)
>   File "/usr/lib/python3.6/site-packages/vdsm/tool/config_lvm_filter.py", 
> line 65, in main
> mounts = lvmfilter.find_lvm_mounts()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 
> 170, in find_lvm_mounts
> vg_name, tags = vg_info(name)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 
> 467, in vg_info
> lv_path
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 
> 566, in _run
> out = subprocess.check_output(args)
>   File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
> **kwargs).stdout
>   File "/usr/lib64/python3.6/subprocess.py", line 438, in run
> output=stdout, stderr=stderr)
> subprocess.CalledProcessError: Command '['/usr/sbin/lvm', 'lvs', 
> '--noheadings', '--readonly', '--config', 'devices {filter=["a|.*|"ed 
> non-zero exit status 4.


I'm not sure if this error comes from the code configuring the lvm filter,
or from lvm.

The best way to handle this depends on why you have an lvm filter that
vdsm-tool cannot handle.

If you know why the lvm filter is set to the current value, and you
know that the system actually needs all the devices in the filter,
you can keep the current lvm filter.

If you don't know why the current lvm filter is set to this value, you
can remove the lvm filter from lvm.conf and run "vdsm-tool config-lvm-filter"
to let the tool configure the default filter.

In general, the lvm filter allows the host to access the devices
needed by the host, for example the root file system.

If you are not sure what the required devices are, please share the
*complete* output of running "vdsm-tool config-lvm-filter", with an lvm.conf
that does not include any filter.

Nir


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Abe E
Yes it throws the following:

This is the recommended LVM filter for this host:

  filter = [ 
"a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", 
"a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", 
"r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

This is the current LVM filter:

  filter = [ 
"a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", 
"a|^/dev/sda|", "r|.*|" ]

To use the recommended filter we need to add multipath
blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:

  blacklist {
  wwid "364cd98f06762ec0029afc17a03e0cf6a"
  }


WARNING: The current LVM filter does not match the recommended filter,
Vdsm cannot configure the filter automatically.

Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
'devices' section to the recommended value.

Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
recommended 'blacklist' section.

It is recommended to reboot to verify the new configuration.




I updated my entry to the following (Blacklist is already configured from 
before):
  filter = [ 
"a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|","a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|","a|^/dev/sda|","r|.*|"
 ]


Although it then threw this error:

[root@ovirt-2 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Parse error at byte 106979 (line 2372): unexpected token
  Failed to load config file /etc/lvm/lvm.conf
Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 209, in main
return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/config_lvm_filter.py", line 
65, in main
mounts = lvmfilter.find_lvm_mounts()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 170, 
in find_lvm_mounts
vg_name, tags = vg_info(name)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 467, 
in vg_info
lv_path
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 566, 
in _run
out = subprocess.check_output(args)
  File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
  File "/usr/lib64/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/usr/sbin/lvm', 'lvs', 
'--noheadings', '--readonly', '--config', 'devices {filter=["a|.*|"ed non-zero 
exit status 4.
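
That "Parse error at byte 106979 (line 2372): unexpected token" appears to come
from lvm itself failing to parse /etc/lvm/lvm.conf after the edit, which
typically happens when the filter value gets split across lines or loses a
quote or bracket. A well-formed entry is a single line inside the devices
section; purely as an illustration, using the filter recommended earlier in
this thread:

  devices {
      filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", "r|.*|" ]
  }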




I thought maybe it required a reboot, although now it has failed to reboot, so I
am going to check on it physically.


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Nir Soffer
On Tue, Mar 22, 2022 at 6:09 PM Abe E  wrote:
>
> Interestingly enough, I am able to re-install oVirt from the engine up to a
> certain point.
> I ran a re-install and it failed, asking me to run vdsm-tool config-lvm-filter:
> Error: Installing Host ovirt-2... Check for LVM filter configuration error:
> Cannot configure LVM filter on host, please run: vdsm-tool config-lvm-filter.

Did you try to run it?

Please share the complete output of running:

   vdsm-tool config-lvm-filter

Nir


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Abe E
Interestingly enough, I am able to re-install oVirt from the engine up to a
certain point.
I ran a re-install and it failed, asking me to run vdsm-tool config-lvm-filter:
Error: Installing Host ovirt-2... Check for LVM filter configuration error:
Cannot configure LVM filter on host, please run: vdsm-tool config-lvm-filter.