On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer <aba...@redhat.com> wrote:

>
>
> On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi <gianluca.cec...@gmail.com>
> wrote:
>
>> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer <aba...@redhat.com> wrote:
>>
>>> From the info it seems that startup panics because gluster bricks cannot
>>> be mounted.
>>>
>>>
>> Yes, that is the case.
>> This is a testbed NUC I use for testing.
>> It has 2 disks: the one named sdb is where oVirt Node has been installed,
>> and the one named sda is where I configured Gluster through the wizard,
>> setting up the 3 volumes for engine, vm, and data.
>>
>>> The filter that you have in the 4.4.2 screenshot should correspond to
>>> your root PV; you can confirm that by running the following (replace the
>>> pv-uuid with the one from your filter):
>>>
>>> # udevadm info /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>> P:
>>> /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>>> N: sda2
>>> S: disk/by-id/ata-QEMU_HARDDISK_QM00003-part2
>>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>>
>>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>>
>>
>> Yes, it is so. But it only works in 4.4.0: in 4.4.2 no special file of
>> type /dev/disk/by-id/.... is created.
>>
> What does "udevadm info" show for /dev/sdb3 on 4.4.2?
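>
> For reference, the command form would be the same as in my example above,
> e.g.:
>
> # udevadm info /dev/sdb3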
>
>
>> See here for the udevadm command output on 4.4.0, which shows that sdb3
>> is the partition corresponding to the PV of the root disk:
>>
>> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>>
>>
>>
>>> Can you give the output of lsblk on your node?
>>>
>>
>> Here is lsblk as seen by 4.4.0, with the Gluster volumes on sda:
>>
>> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>>
>> And here is lsblk as seen from 4.4.2, with an empty sda:
>>
>> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>>
>>
>>> Can you check that the same filter is in initramfs?
>>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>>
>>
>> Here is the command output from 4.4.0, which shows no filter:
>>
>> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>>
>> And here it is from 4.4.2 emergency mode, where I have to use the path
>> /boot/ovirt-node-ng-4.4.2-0..../initramfs-....
>> because there is no initrd file in /boot (in the screenshot you can also
>> see the output of "ll /boot"):
>>
>> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>>
>>
>>
>>> We have the following tool on the hosts:
>>> # vdsm-tool config-lvm-filter -y
>>> It only sets the filter for local LVM devices; it is run as part of
>>> deployment and upgrade when done from the engine.
>>>
>>> If you have other volumes which have to be mounted as part of your
>>> startup, then you should add their uuids to the filter as well (see the
>>> example below).
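>>>
>>> For example, a filter accepting both the root PV and a Gluster PV could
>>> look like this (the uuids below are placeholders, not values from your
>>> setup):
>>>
>>> filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-<root-pv-uuid>$|", "a|^/dev/disk/by-id/lvm-pv-uuid-<gluster-pv-uuid>$|", "r|.*|"]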
>>>
>>
>> I didn't do anything special in 4.4.0: I installed the node on the
>> intended disk, which was seen as sdb, and then through the single node HCI
>> wizard I configured the Gluster volumes on sda.
>>
>> Any suggestion on what to do with the 4.4.2 initrd, or on the correct
>> dracut command to run from 4.4.0 to fix the 4.4.2 initramfs?
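>> (For example, would something like "dracut --force /boot/<initramfs-file>
>> <kernel-version>" be the right command form? I'm just guessing here.)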
>>
> The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see
> what needs to be fixed in this case.
>
>
>> BTW: in the meantime, if necessary, could I also boot from 4.4.0 and let
>> it run with the engine at 4.4.2?
>>
> It might work, but it is probably not well tested.
>
> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>

Maybe remove the LVM filter from /etc/lvm/lvm.conf while in 4.4.2
maintenance mode. If the fs is mounted as read-only, try

mount -o remount,rw /

then sync and try to reboot 4.4.2.
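
A minimal, untested sketch of the whole sequence (the sed expression is only
an assumption about how the filter line appears in lvm.conf):

# mount -o remount,rw /                          # make the root fs writable
# sed -i.bak 's/^[[:space:]]*filter =/# filter =/' /etc/lvm/lvm.conf   # comment out the filter, keeping a .bak copy
# sync
# reboot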


>
>>
>>
>> Thanks,
>> Gianluca
>>
>