[ovirt-users] Re: ovirt-engine and host certification is expired in ovirt4.0

2020-10-03 Thread momokch--- via Users
hello, thank you for your reply,

I have not tried to set the host into maintenance mode because many services are
running on the host.
My question is: if I set the host into maintenance mode, will all of the servers
running on the host be shut down automatically, given that there is no other host
available in my ovirt-engine?
In my case some of the VM servers are not responding, so I cannot shut those
servers down.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EL6GSOL4XKHM7GS3KFHMQO3DKJXMYHWB/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
On Sat, Oct 3, 2020 at 9:42 PM Amit Bawer  wrote:

>
>
> On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:
>
>>
>>
>> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>>
>
> Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
> maintenance mode
> if the fs is mounted as read only, try
>
> mount -o remount,rw /
>
> sync and try to reboot 4.4.2.
>
>
Indeed, if I run the following command from the emergency shell in 4.4.2:

lvs --config 'devices { filter = [ "a|.*|" ] }'

I also see all the gluster volumes, so I think the update injected the
nasty filter.
Possibly during the update the command
# vdsm-tool config-lvm-filter -y
was executed and erroneously created the filter?
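
For anyone hitting the same issue, a quick sketch to check the filter state before touching anything (reusing the commands from this thread):

# show any filter currently configured on the host
grep '^filter = ' /etc/lvm/lvm.conf
# list LVs ignoring the filter, to verify the gluster PVs/LVs are really there
lvs --config 'devices { filter = [ "a|.*|" ] }'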

Anyway, remounting the root filesystem read-write, removing the filter line
from lvm.conf and rebooting worked: 4.4.2 booted OK and I was able
to exit global maintenance and bring the engine up.
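
For reference, a minimal sketch of that recovery sequence from the emergency shell (assuming the offending entry is the single "filter = [...]" line the upgrade added to /etc/lvm/lvm.conf; adjust the sed pattern if yours differs):

# remount root read-write (the emergency shell leaves it read-only)
mount -o remount,rw /
# comment out the injected filter line
sed -i 's/^filter = /# filter = /' /etc/lvm/lvm.conf
sync
reboot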

Thanks Amit for the help and all the insights.

Right now there are only two problems:

1) A long-running problem: in the engine web admin all the volumes and storage
domains are shown as up, while only the hosted-engine one is actually up and
"data" and "vmstore" are down, as I can verify from the host, where there is
only one /rhev/data-center/ mount:

[root@ovirt01 ~]# df -h
Filesystem                                               Size  Used Avail Use% Mounted on
devtmpfs                                                  16G     0   16G   0% /dev
tmpfs                                                     16G   16K   16G   1% /dev/shm
tmpfs                                                     16G   18M   16G   1% /run
tmpfs                                                     16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1   133G  3.9G  129G   3% /
/dev/mapper/onn-tmp                                     1014M   40M  975M   4% /tmp
/dev/mapper/gluster_vg_sda-gluster_lv_engine             100G  9.0G   91G   9% /gluster_bricks/engine
/dev/mapper/gluster_vg_sda-gluster_lv_data               500G  126G  375G  26% /gluster_bricks/data
/dev/mapper/gluster_vg_sda-gluster_lv_vmstore             90G  6.9G   84G   8% /gluster_bricks/vmstore
/dev/mapper/onn-home                                    1014M   40M  975M   4% /home
/dev/sdb2                                                976M  307M  603M  34% /boot
/dev/sdb1                                                599M  6.8M  593M   2% /boot/efi
/dev/mapper/onn-var                                       15G  263M   15G   2% /var
/dev/mapper/onn-var_log                                  8.0G  541M  7.5G   7% /var/log
/dev/mapper/onn-var_crash                                 10G  105M  9.9G   2% /var/crash
/dev/mapper/onn-var_log_audit                            2.0G   79M  2.0G   4% /var/log/audit
ovirt01st.lutwyn.storage:/engine                         100G   10G   90G  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
tmpfs                                                    3.2G     0  3.2G   0% /run/user/1000
[root@ovirt01 ~]#

I can also wait 10 minutes and nothing changes. The way I exit from this
stalled situation is to power on a VM, which obviously fails:

VM f32 is down with error. Exit message: Unable to get volume size for
domain d39ed9a3-3b10-46bf-b334-e8970f5deca1 volume
242d16c6-1fd9-4918-b9dd-0d477a86424c.
10/4/20 12:50:41 AM

and suddenly all the data storage domains are deactivated (from the engine's
point of view, because actually they were never active...):

Storage Domain vmstore (Data Center Default) was deactivated by system
because it's not visible by any of the hosts.
10/4/20 12:50:31 AM

Then I can go to Data Centers --> Default --> Storage, activate the "vmstore"
and "data" storage domains, and suddenly they get activated and the
filesystems mounted.

[root@ovirt01 ~]# df -h | grep rhev
ovirt01st.lutwyn.storage:/engine    100G   10G   90G  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
ovirt01st.lutwyn.storage:/data      500G  131G  370G  27% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_data
ovirt01st.lutwyn.storage:/vmstore    90G  7.8G   83G   9% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_vmstore
[root@ovirt01 ~]#

and the VM starts OK now.

I already reported this, but I don't know if there is a bugzilla open for it
yet.
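
For what it's worth, the same activation should also be doable without the web UI through the engine REST API; a rough sketch (the engine FQDN, credentials, CA file path, DC_ID and SD_ID are all placeholders for my setup):

curl -s --cacert ca.pem -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' -d '<action/>' \
     'https://engine.example.com/ovirt-engine/api/datacenters/DC_ID/storagedomains/SD_ID/activate'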

2) I cannot connect to the cockpit console of the node.

In Firefox (version 80) on my Fedora 31 I get:
"
Secure Connection Failed

An error occurred during a connection to ovirt01.lutwyn.local:9090.
PR_CONNECT_RESET_ERROR

The page you are trying to view cannot be shown because the
authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem.

Learn more…
"
In Chrome (build 85.0.4183.121)

"
Your connection is not private
Attackers might be trying to steal your information from
ovirt01.lutwyn.local (for example, 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:

>
>
> On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi 
> wrote:
>
>> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>>
>>> From the info it seems that startup panics because gluster bricks cannot
>>> be mounted.
>>>
>>>
>> Yes, it is so
>> This is a testbed NUC I use for testing.
>> It has 2 disks, the one named sdb is where ovirt node has been installed.
>> The one named sda is where I configured gluster through the wizard,
>> configuring the 3 volumes for engine, vm, data
>>
>> The filter that you do have in the 4.4.2 screenshot should correspond to
>>> your root pv,
>>> you can confirm that by doing (replace the pv-uuid with the one from
>>> your filter):
>>>
>>> #udevadm info
>>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>> P:
>>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>>> N: sda2
>>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>>
>>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>>
>>
>> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
>> special file created of type /dev/disk/by-id/
>>
> What does "udevadm info" show for /dev/sdb3 on 4.4.2?
>
>
>> See here for udevadm command on 4.4.0 that shows sdb3 that is the
>> partition corresponding to PV of root disk
>>
>> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>>
>>
>>
>>> Can you give the output of lsblk on your node?
>>>
>>
>> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>>
>> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>>
>> And here lsblk as seen from 4.4.2 with an empty sda:
>>
>> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>>
>>
>>> Can you check that the same filter is in initramfs?
>>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>>
>>
>> Here the command from 4.4.0 that shows no filter
>>
>> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>>
>> And here from 4.4.2 emergency mode, where I have to use the path
>> /boot/ovirt-node-ng-4.4.2-0/initramfs-
>> because there is no initrd file in /boot (in the screenshot you also see the output of
>> "ll /boot")
>>
>> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>>
>>
>>
>>> We have the following tool on the hosts
>>> # vdsm-tool config-lvm-filter -y
>>> it only sets the filter for local lvm devices, this is run as part of
>>> deployment and upgrade when done from
>>> the engine.
>>>
>>> If you have other volumes which have to be mounted as part of your
>>> startup
>>> then you should add their uuids to the filter as well.
>>>
>>
>> I didn't do anything special in 4.4.0: I installed the node on the intended
>> disk, that was seen as sdb and then through the single node hci wizard I
>> configured the gluster volumes on sda
>>
>> Any suggestion on what to do on 4.4.2 initrd or running correct dracut
>> command from 4.4.0 to correct initramfs of 4.4.2?
>>
> The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see
> what needs to be fixed in this case.
>
>
>> BTW: could in the mean time if necessary also boot from 4.4.0 and let it
>> go with engine in 4.4.2?
>>
> Might work, probably not too tested.
>
> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>

Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
maintenance mode
if the fs is mounted as read only, try

mount -o remount,rw /

sync and try to reboot 4.4.2.


>
>>
>>
>> Thanks,
>> Gianluca
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZK3JS7OUIPU4H5KJLGOW7C5IPPAIYPTM/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>
>> From the info it seems that startup panics because gluster bricks cannot
>> be mounted.
>>
>>
> Yes, it is so
> This is a testbed NUC I use for testing.
> It has 2 disks, the one named sdb is where ovirt node has been installed.
> The one named sda is where I configured gluster through the wizard,
> configuring the 3 volumes for engine, vm, data
>
> The filter that you do have in the 4.4.2 screenshot should correspond to
>> your root pv,
>> you can confirm that by doing (replace the pv-uuid with the one from your
>> filter):
>>
>> #udevadm info
>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>> P:
>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>> N: sda2
>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>
>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>
>
> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
> special file created of type /dev/disk/by-id/
>
What does "udevadm info" show for /dev/sdb3 on 4.4.2?


> See here for udevadm command on 4.4.0 that shows sdb3 that is the
> partition corresponding to PV of root disk
>
> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>
>
>
>> Can you give the output of lsblk on your node?
>>
>
> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>
> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>
> And here lsblk as seen from 4.4.2 with an empty sda:
>
> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>
>
>> Can you check that the same filter is in initramfs?
>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>
>
> Here the command from 4.4.0 that shows no filter
>
> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>
> And here from 4.4.2 emergency mode, where I have to use the path
> /boot/ovirt-node-ng-4.4.2-0/initramfs-
> because there is no initrd file in /boot (in the screenshot you also see the output of
> "ll /boot")
>
> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>
>
>
>> We have the following tool on the hosts
>> # vdsm-tool config-lvm-filter -y
>> it only sets the filter for local lvm devices, this is run as part of
>> deployment and upgrade when done from
>> the engine.
>>
>> If you have other volumes which have to be mounted as part of your startup
>> then you should add their uuids to the filter as well.
>>
>
> I didn't do anything special in 4.4.0: I installed the node on the intended disk,
> that was seen as sdb and then through the single node hci wizard I
> configured the gluster volumes on sda
>
> Any suggestion on what to do on 4.4.2 initrd or running correct dracut
> command from 4.4.0 to correct initramfs of 4.4.2?
>
The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see
what needs to be fixed in this case.


> BTW: could in the mean time if necessary also boot from 4.4.0 and let it
> go with engine in 4.4.2?
>
Might work, probably not too tested.

For the gluster bricks being filtered out in 4.4.2, this seems like [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805


>
>
> Thanks,
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDJHASPYE5PC2HFJC2LJDPGKV2JA7MAV/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
On Sat, Oct 3, 2020 at 6:33 PM Gianluca Cecchi 
wrote:

> Sorry, I see that there was an error in the lsinitrd command on 4.4.2,
> inverting the "-f" position.
> Here is the screenshot, which anyway shows no active filter:
>
> https://drive.google.com/file/d/19VmgvsHU2DhJCRzCbO9K_Xyr70x4BqXX/view?usp=sharing
>
> Gianluca
>
>
> On Sat, Oct 3, 2020 at 6:26 PM Gianluca Cecchi 
> wrote:
>
>> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>>
>>> From the info it seems that startup panics because gluster bricks cannot
>>> be mounted.
>>>
>>>
>> Yes, it is so
>> This is a testbed NUC I use for testing.
>> It has 2 disks, the one named sdb is where ovirt node has been installed.
>> The one named sda is where I configured gluster through the wizard,
>> configuring the 3 volumes for engine, vm, data
>>
>> The filter that you do have in the 4.4.2 screenshot should correspond to
>>> your root pv,
>>> you can confirm that by doing (replace the pv-uuid with the one from
>>> your filter):
>>>
>>> #udevadm info
>>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>> P:
>>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>>> N: sda2
>>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>>
>>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>>
>>
>> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
>> special file created of type /dev/disk/by-id/
>> See here for udevadm command on 4.4.0 that shows sdb3 that is the
>> partition corresponding to PV of root disk
>>
>> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>>
>>
>>
>>> Can you give the output of lsblk on your node?
>>>
>>
>> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>>
>> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>>
>> And here lsblk as seen from 4.4.2 with an empty sda:
>>
>> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>>
>>
>>> Can you check that the same filter is in initramfs?
>>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>>
>>
>> Here the command from 4.4.0 that shows no filter
>>
>> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>>
>> And here from 4.4.2 emergency mode, where I have to use the path
>> /boot/ovirt-node-ng-4.4.2-0/initramfs-
>> because there is no initrd file in /boot (in the screenshot you also see the output of
>> "ll /boot")
>>
>> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>>
>>
>>
>>> We have the following tool on the hosts
>>> # vdsm-tool config-lvm-filter -y
>>> it only sets the filter for local lvm devices, this is run as part of
>>> deployment and upgrade when done from
>>> the engine.
>>>
>>> If you have other volumes which have to be mounted as part of your
>>> startup
>>> then you should add their uuids to the filter as well.
>>>
>>
>> I didn't do anything special in 4.4.0: I installed the node on the intended
>> disk, that was seen as sdb and then through the single node hci wizard I
>> configured the gluster volumes on sda
>>
>> Any suggestion on what to do on 4.4.2 initrd or running correct dracut
>> command from 4.4.0 to correct initramfs of 4.4.2?
>>
>> BTW: could in the mean time if necessary also boot from 4.4.0 and let it
>> go with engine in 4.4.2?
>>
>> Thanks,
>> Gianluca
>>
>

Too many photos... ;-)

I had used the 4.4.0 initramfs.
Here is the output using the 4.4.2 initramfs:

https://drive.google.com/file/d/1yLzJzokK5C1LHNuFbNoXWHXfzFncXe0O/view?usp=sharing

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UEWNQHRAMLKAL3XZOJGOOQ3J77DAMHFA/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
Sorry, I see that there was an error in the lsinitrd command on 4.4.2,
inverting the "-f" position.
Here is the screenshot, which anyway shows no active filter:
https://drive.google.com/file/d/19VmgvsHU2DhJCRzCbO9K_Xyr70x4BqXX/view?usp=sharing

Gianluca


On Sat, Oct 3, 2020 at 6:26 PM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>
>> From the info it seems that startup panics because gluster bricks cannot
>> be mounted.
>>
>>
> Yes, it is so
> This is a testbed NUC I use for testing.
> It has 2 disks, the one named sdb is where ovirt node has been installed.
> The one named sda is where I configured gluster through the wizard,
> configuring the 3 volumes for engine, vm, data
>
> The filter that you do have in the 4.4.2 screenshot should correspond to
>> your root pv,
>> you can confirm that by doing (replace the pv-uuid with the one from your
>> filter):
>>
>> #udevadm info
>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>> P:
>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>> N: sda2
>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>
>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>
>
> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
> special file created of type /dev/disk/by-id/
> See here for udevadm command on 4.4.0 that shows sdb3 that is the
> partition corresponding to PV of root disk
>
> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>
>
>
>> Can you give the output of lsblk on your node?
>>
>
> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>
> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>
> And here lsblk as seen from 4.4.2 with an empty sda:
>
> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>
>
>> Can you check that the same filter is in initramfs?
>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>
>
> Here the command from 4.4.0 that shows no filter
>
> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>
> And here from 4.4.2 emergency mode, where I have to use the path
> /boot/ovirt-node-ng-4.4.2-0/initramfs-
> because there is no initrd file in /boot (in the screenshot you also see the output of
> "ll /boot")
>
> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>
>
>
>> We have the following tool on the hosts
>> # vdsm-tool config-lvm-filter -y
>> it only sets the filter for local lvm devices, this is run as part of
>> deployment and upgrade when done from
>> the engine.
>>
>> If you have other volumes which have to be mounted as part of your startup
>> then you should add their uuids to the filter as well.
>>
>
> I didn't do anything special in 4.4.0: I installed the node on the intended disk,
> that was seen as sdb and then through the single node hci wizard I
> configured the gluster volumes on sda
>
> Any suggestion on what to do on 4.4.2 initrd or running correct dracut
> command from 4.4.0 to correct initramfs of 4.4.2?
>
> BTW: could in the mean time if necessary also boot from 4.4.0 and let it
> go with engine in 4.4.2?
>
> Thanks,
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GMP6UHTWIR3BCCNEJT6KU4QRORFSC5DB/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:

> From the info it seems that startup panics because gluster bricks cannot
> be mounted.
>
>
Yes, it is so.
This is a testbed NUC I use for testing.
It has 2 disks: the one named sdb is where oVirt Node has been installed,
and the one named sda is where I configured gluster through the wizard,
configuring the 3 volumes for engine, vm and data.

The filter that you do have in the 4.4.2 screenshot should correspond to
> your root pv,
> you can confirm that by doing (replace the pv-uuid with the one from your
> filter):
>
> #udevadm info
>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
> P:
> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
> N: sda2
> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>
> In this case sda2 is the partition of the root-lv shown by lsblk.
>

Yes, it is so. Of course it works only in 4.4.0; in 4.4.2 there is no
special file of type /dev/disk/by-id/ created.
See here for the udevadm command on 4.4.0, which shows sdb3 as the partition
corresponding to the PV of the root disk:
https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing



> Can you give the output of lsblk on your node?
>

Here lsblk as seen by 4.4.0 with gluster volumes on sda:
https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing

And here lsblk as seen from 4.4.2 with an empty sda:
https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing


> Can you check that the same filter is in initramfs?
> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>

Here the command from 4.4.0 that shows no filter
https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing

And here from the 4.4.2 emergency mode, where I have to use the path
/boot/ovirt-node-ng-4.4.2-0/initramfs-
because there is no initrd file in /boot (in the screenshot you also see the output of
"ll /boot"):
https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing



> We have the following tool on the hosts
> # vdsm-tool config-lvm-filter -y
> it only sets the filter for local lvm devices, this is run as part of
> deployment and upgrade when done from
> the engine.
>
> If you have other volumes which have to be mounted as part of your startup
> then you should add their uuids to the filter as well.
>

I didn't do anything special in 4.4.0: I installed the node on the intended disk,
which was seen as sdb, and then through the single node HCI wizard I
configured the gluster volumes on sda.

Any suggestion on what to do with the 4.4.2 initrd, or on running the correct dracut
command from 4.4.0 to fix the 4.4.2 initramfs?

BTW: could I, in the meantime if necessary, also boot from 4.4.0 and let it run
with the engine on 4.4.2?

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VV4NAZ6XFITMYPRDMHRWVWOMFCASTKY6/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
From the info it seems that startup panics because gluster bricks cannot be
mounted.

The filter that you do have in the 4.4.2 screenshot should correspond to
your root pv,
you can confirm that by doing (replace the pv-uuid with the one from your
filter):

#udevadm info
 /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
P:
/devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
N: sda2
S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ

In this case sda2 is the partition of the root-lv shown by lsblk.
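
The same mapping should also be obtainable by resolving the symlink directly, e.g.:

# resolve the lvm-pv-uuid symlink from the filter to its underlying device node
readlink -f /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ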

Can you give the output of lsblk on your node?

Can you check that the same filter is in initramfs?
# lsinitrd -f  /etc/lvm/lvm.conf | grep filter

We have the following tool on the hosts:
# vdsm-tool config-lvm-filter -y
It only sets the filter for local LVM devices; this is run as part of
deployment and upgrade when done from the engine.

If you have other volumes which have to be mounted as part of your startup
then you should add their uuids to the filter as well.
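
For example (just a sketch, with placeholder PV uuids), a filter accepting both the root PV and a gluster PV while rejecting everything else would look something like:

filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-ROOTPVUUID$|", "a|^/dev/disk/by-id/lvm-pv-uuid-GLUSTERPVUUID$|", "r|.*|"]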


On Sat, Oct 3, 2020 at 3:19 PM Gianluca Cecchi 
wrote:

> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Fri, Sep 25, 2020 at 3:32 PM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
>>> wrote:
>>>
 oVirt Node 4.4.2 is now generally available

 The oVirt project is pleased to announce the general availability of
 oVirt Node 4.4.2 , as of September 25th, 2020.

 This release completes the oVirt 4.4.2 release published on September
 17th

>>>
>>> Thanks for the news!
>>>
>>> How to prevent hosts entering emergency mode after upgrade from oVirt
 4.4.1

 Due to Bug 1837864 - Host enter emergency mode after upgrading to latest build

 If you have your root file system on a multipath device on your hosts
 you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
 your host entering emergency mode.

 In order to prevent this be sure to upgrade oVirt Engine first, then on
 your hosts:

    1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
    2. Reboot.
    3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
    4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
    5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
    6. Reboot.



>>> What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
>>> to follow the same steps as if I were in 4.4.1 or what?
>>> I would like to avoid going through 4.4.1 if possible.
>>>
>>
>> I don't think we had anyone test 4.4.0 to 4.4.2, but the above procedure
>> should work for that case as well.
>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>
>> # grep '^filter = ' /etc/lvm/lvm.conf
>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>
>>
>>
>>
>>>
>>> Thanks,
>>> Gianluca
>>>
>>
>>
> OK, so I tried on my single-host HCI, installed with ovirt-node-ng 4.4.0
> and the gluster wizard and never updated until now.
> I updated the self-hosted engine to 4.4.2 without problems.
>
> My host doesn't have any filter or global_filter set up in lvm.conf  in
> 4.4.0.
>
> So I update it:
>
> [root@ovirt01 vdsm]# yum update
> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM
> CEST.
> Dependencies resolved.
>
> ================================================================================
>  Package                     Architecture  Version      Repository      Size
> ================================================================================
> Installing:
>  ovirt-node-ng-image-update  noarch        4.4.2-1.el8  ovirt-4.4      782 M
>      replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>
> Transaction Summary
> ================================================================================
> Install  1 Package
>
> Total download size: 782 M
> Is this ok [y/N]: y
> Downloading Packages:
> ovirt-node-ng-image-update-4.4  27% [=         ] 6.0 MB/s | 145 MB  01:45 ETA
>
> --------------------------------------------------------------------------------
> Total                                                   5.3 MB/s | 782 MB  02:28
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:
>  1/1
>   Running scriptlet: 

[ovirt-users] Re: CPU Type / Cluster

2020-10-03 Thread Michael Jones
To get these two hosts into a cluster, would I need to castrate them down
to Nehalem, or would I be able to botch the DB for the 2nd host from
"EPYC-IBPB" to "Opteron_G5"?

I don't really want to drop them down to Nehalem, so either I can botch
the 2nd CPU so they are both on Opteron_G5, or I'll have to buy a new CPU
for host1 to bring it up to "EPYC-IBPB". (A sketch for listing the CPU models
common to both hosts follows the two lists below.)

I have;

host1;

# vdsm-client Host getCapabilities | grep cpuFlags | tr "," "\n" | grep
model_ | sed 's/"//' | sort -n
model_486
model_Conroe
model_cpu64-rhel6
model_kvm32
model_kvm64
model_Nehalem
model_Opteron_G1
model_Opteron_G2
model_Opteron_G3
model_Opteron_G4
model_Opteron_G5  "AMD FX(tm)-8350 Eight-Core Processor"
model_Penryn
model_pentium
model_pentium2
model_pentium3
model_qemu32
model_qemu64
model_Westmere

host2;

# vdsm-client Host getCapabilities | grep cpuFlags | tr "," "\n" | grep
model_ | sed 's/"//' | sort -n
model_486
model_Conroe
model_Dhyana
model_EPYC
model_EPYC-IBPB  "AMD Ryzen 7 1700X Eight-Core Processor"
model_kvm32
model_kvm64
model_Nehalem
model_Opteron_G1
model_Opteron_G2
model_Opteron_G3
model_Penryn
model_pentium
model_pentium2
model_pentium3
model_qemu32
model_qemu64
model_SandyBridge
model_Westmere
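
A quick sketch for comparing the two lists (the file names are just placeholders; comm needs plain lexicographic sort, so drop the -n):

# on each host: save the sorted model list
vdsm-client Host getCapabilities | grep cpuFlags | tr "," "\n" | grep model_ | sed 's/"//' | sort > host1_models.txt
# ...run the same on host2, copy the file over, then list the models both hosts support
comm -12 host1_models.txt host2_models.txt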

Thanks,

Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NULJX3JB736A4MHC2GX7ADDW3ZT3C37O/