[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-27 Thread femi adegoke
For me, I only had this problem on 1 of the 4 hosts in my cluster.
Here is what I did:
# imgbase layout
The result was "ovirt-node-ng-4.2.6.1-0.20180913.0".

# imgbase base --remove ovirt-node-ng-4.2.6.1-0.20180913.0
(used to remove the "failed" 4.2.6.1 update)

# yum reinstall ovirt-node-ng-image-update.noarch
(used to re-install 4.2.6.1)

Now the host is updated to 4.2.6.1
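
The three steps above can be collapsed into a short sketch (the layer name is taken from my `imgbase layout` output and will differ on other hosts; the guard is only there so the snippet is safe to paste):

```shell
# Sketch of the recovery steps above. The layer name comes from this
# host's `imgbase layout` output and is an assumption for any other host.
LAYER="ovirt-node-ng-4.2.6.1-0.20180913.0"
if command -v imgbase >/dev/null 2>&1; then
    imgbase base --remove "$LAYER"                      # remove the "failed" 4.2.6.1 base
    yum reinstall -y ovirt-node-ng-image-update.noarch  # re-install the 4.2.6.1 update
else
    echo "imgbase not found - run this on an oVirt Node host"
fi
```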
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/43BDS7LGT7G4ZYYQ2Y3QL2O64UYNA3J2/


[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-27 Thread o . krueckel
Any news on this?

o.


[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread femi adegoke
Yuval,

Do we just roll back to the previous version, or will there be a fix?


[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread Yuval Turgeman
Wait, the root disk's UUID is different??


[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread Yuval Turgeman
Bootid is there, so that's not the issue... can you run `imgbase --debug check`?


[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread KRUECKEL OLIVER


From: Yuval Turgeman
Sent: Monday, 24 September 2018 11:29:31
To: Sandro Bonazzola
Cc: KRUECKEL OLIVER; Ryan Barry; Chen Shao; Ying Cui; users
Subject: Re: [ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded


[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread o . krueckel
Sorry, I could not find a way to attach files on this website, so I have added the files as text to this list.

## 4.2.6
[root@ovirt-n1 boot]# imgbase w
You are on ovirt-node-ng-4.2.6-0.20180903.0+1
[root@ovirt-n1 boot]# cat /proc/cmdline
BOOT_IMAGE=/ovirt-node-ng-4.2.6-0.20180903.0+1/vmlinuz-3.10.0-862.11.6.el7.x86_64
 root=/dev/onn_ovirt-n1/ovirt-node-ng-4.2.6-0.20180903.0+1 ro crashkernel=auto 
rd.lvm.lv=onn_ovirt-n1/swap 
rd.lvm.lv=onn_ovirt-n1/ovirt-node-ng-4.2.6-0.20180903.0+1 rhgb quiet 
LANG=en_US.UTF-8 img.bootid=ovirt-node-ng-4.2.6-0.20180903.0+1
[root@ovirt-n1 boot]#

## 4.2.6.1 after 1st reboot; grub.cfg is the same as for 4.2.6
[root@ovirt-n1 ~]# imgbase w
You are on ovirt-node-ng-4.2.6.1-0.20180913.0+1
[root@ovirt-n1 ~]# cat /proc/cmdline
BOOT_IMAGE=/ovirt-node-ng-4.2.6.1-0.20180913.0+1/vmlinuz-3.10.0-862.11.6.el7.x86_64
 root=/dev/onn_ovirt-n1/ovirt-node-ng-4.2.6.1-0.20180913.0+1 ro 
crashkernel=auto rd.lvm.lv=onn_ovirt-n1/swap 
rd.lvm.lv=onn_ovirt-n1/ovirt-node-ng-4.2.6.1-0.20180913.0+1 rhgb quiet 
LANG=en_US.UTF-8 img.bootid=ovirt-node-ng-4.2.6.1-0.20180913.0+1
[root@ovirt-n1 ~]#

## 4.2.6.1 after 2nd reboot; grub.cfg differs from 4.2.6.1 after the 1st reboot
[root@ovirt-n1 ~]# imgbase w
You are on ovirt-node-ng-4.2.6.1-0.20180913.0+1
[root@ovirt-n1 ~]# cat /proc/cmdline
BOOT_IMAGE=/ovirt-node-ng-4.2.6.1-0.20180913.0+1/vmlinuz-3.10.0-862.11.6.el7.x86_64
 root=/dev/onn_ovirt-n1/ovirt-node-ng-4.2.6.1-0.20180913.0+1 ro 
crashkernel=auto rd.lvm.lv=onn_ovirt-n1/swap 
rd.lvm.lv=onn_ovirt-n1/ovirt-node-ng-4.2.6.1-0.20180913.0+1 rhgb quiet 
LANG=en_US.UTF-8 img.bootid=ovirt-node-ng-4.2.6.1-0.20180913.0+1
[root@ovirt-n1 ~]#
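
The only substantive differences in the diff below are the kernel version and the root filesystem UUID. A small helper (a sketch, not from the original mail; the sample lines are copied from the diff) makes the UUID comparison easy to eyeball:

```shell
# Pull the filesystem UUID out of a grub.cfg `search` line.
# Sample lines are copied from the grub.cfg diff in this message.
uuid_of() {
    printf '%s\n' "$1" | grep -o '[0-9a-f]\{8\}-[0-9a-f-]*'
}

old='search --no-floppy --fs-uuid --set=root 57ca0a93-12a7-4102-86e4-d4da992dca9b'
new='search --no-floppy --fs-uuid --set=root 4f32de64-d218-4467-b8cc-0ffbcc10103e'
uuid_of "$old"   # 57ca0a93-12a7-4102-86e4-d4da992dca9b
uuid_of "$new"   # 4f32de64-d218-4467-b8cc-0ffbcc10103e
```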

## diff of grub.cfg
:~> diff 1_reboot_ovirt-node-ng-4.2.6.1-0.20180913.0+1_grub.cfg 2_reboot_ovirt-node-ng-4.2.6.1-0.20180913.0+1_grub.cfg
90c90
< menuentry 'CentOS Linux (3.10.0-693.21.1.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-693.21.1.el7.x86_64-advanced-57ca0a93-12a7-4102-86e4-d4da992dca9b' {
---
> menuentry 'CentOS Linux (3.10.0-862.11.6.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-862.11.6.el7.x86_64-advanced-4f32de64-d218-4467-b8cc-0ffbcc10103e' {
98c98
< search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  57ca0a93-12a7-4102-86e4-d4da992dca9b
---
> search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  4f32de64-d218-4467-b8cc-0ffbcc10103e
100c100
< search --no-floppy --fs-uuid --set=root 57ca0a93-12a7-4102-86e4-d4da992dca9b
---
> search --no-floppy --fs-uuid --set=root 4f32de64-d218-4467-b8cc-0ffbcc10103e
102,103c102,103
<   linux16 /boot/vmlinuz-3.10.0-693.21.1.el7.x86_64 root=UUID=57ca0a93-12a7-4102-86e4-d4da992dca9b ro crashkernel=auto console=ttyS0 LANG=en_US.UTF-8
<   initrd16 /boot/initramfs-3.10.0-693.21.1.el7.x86_64.img
---
>   linux16 /boot/vmlinuz-3.10.0-862.11.6.el7.x86_64 root=UUID=4f32de64-d218-4467-b8cc-0ffbcc10103e ro crashkernel=auto console=ttyS0 LANG=en_US.UTF-8
>   initrd16 /boot/initramfs-3.10.0-862.11.6.el7.x86_64.img
109,110c109,110
< submenu "tboot 1.9.5" {
< menuentry 'CentOS Linux GNU/Linux, with tboot 1.9.5 and Linux 3.10.0-693.21.1.el7.x86_64' --class centos --class gnu-linux --class gnu --class os --class tboot {
---
> submenu "tboot 1.9.6" {
> menuentry 'CentOS Linux GNU/Linux, with tboot 1.9.6 and Linux 3.10.0-862.11.6.el7.x86_64' --class centos --class gnu-linux --class gnu --class os --class tboot {
115c115
< search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  57ca0a93-12a7-4102-86e4-d4da992dca9b
---
> search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  4f32de64-d218-4467-b8cc-0ffbcc10103e
117c117
< search --no-floppy --fs-uuid --set=root 57ca0a93-12a7-4102-86e4-d4da992dca9b
---
> search --no-floppy --fs-uuid --set=root 4f32de64-d218-4467-b8cc-0ffbcc10103e
119c119
<   echo    'Loading tboot 1.9.5 ...'
---
>   echo    'Loading tboot 1.9.6 ...'
121,122c121,122
<   echo    'Loading Linux 3.10.0-693.21.1.el7.x86_64 ...'
<   module /boot/vmlinuz-3.10.0-693.21.1.el7.x86_64 root=UUID=57ca0a93-12a7-4102-86e4-d4da992dca9b ro crashkernel=auto console=ttyS0 intel_iommu=on
---
>   echo    'Loading Linux 3.10.0-862.11.6.el7.x86_64 ...'
>   module /boot/vmlinuz-3.10.0-862.11.6.el7.x86_64 root=UUID=4f32de64-d218-4467-b8cc-0ffbcc10103e ro crashkernel=auto console=ttyS0 intel_iommu=on
124c124
<   module /boot/initramfs-3.10.0-693.21.1.el7.x86_64.img
---
>   module 

[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread Yuval Turgeman
Can you share the output from `cat /proc/cmdline` and perhaps the grub.conf?
Imgbased adds a bootid, and perhaps it's missing for some reason.
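
A quick way to check for the marker (a sketch, not from the original mail): the sample cmdline is copied from a later message in this thread, so on a live host substitute "$(cat /proc/cmdline)" for it.

```shell
# Check a kernel command line for the img.bootid= marker that imgbased adds.
has_bootid() {
    case "$1" in
        *img.bootid=*) return 0 ;;
        *)             return 1 ;;
    esac
}

# Sample copied from o.krueckel's /proc/cmdline output in this thread.
sample='BOOT_IMAGE=/ovirt-node-ng-4.2.6-0.20180903.0+1/vmlinuz-3.10.0-862.11.6.el7.x86_64 root=/dev/onn_ovirt-n1/ovirt-node-ng-4.2.6-0.20180903.0+1 ro img.bootid=ovirt-node-ng-4.2.6-0.20180903.0+1'
if has_bootid "$sample"; then
    echo "img.bootid present"
else
    echo "img.bootid missing"
fi
```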


[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread Sandro Bonazzola
Adding some people who may help understand what happened and work on a solution for this.


[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread o . krueckel
I have seen this problem for some time (roughly every third or fourth update runs into it) and have always worked around it with a fresh installation. Now I've looked at it more closely (maybe this information will help someone who knows the internals).

Installation runs without a problem; after a reboot the system runs as expected, but after a second reboot => node status: DEGRADED.

What I found is: /dev/sda1 and /dev/sda2 are missing, so the system cannot mount /boot and /boot/efi!

In dmesg all three partitions are shown, and in parted as well. After a partprobe, /dev/sda1 and /dev/sda2 are available under /dev/; mount /boot or mount /boot/efi does not report an error, but the partitions are nevertheless not mounted (df -h does not show them, and umount /boot or umount /boot/efi says the same).
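
One way to confirm that the partitions are really not mounted, despite mount not complaining (a sketch, not from the original mail): findmnt, part of util-linux, reads /proc/self/mountinfo, so it reports the kernel's actual view rather than what a mount command claimed.

```shell
# Report whether a mount point is actually mounted according to the kernel.
check_mounted() {
    if findmnt -n "$1" >/dev/null 2>&1; then
        echo "$1 is mounted"
    else
        echo "$1 is NOT mounted"
    fi
}
check_mounted /boot
check_mounted /boot/efi
```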

I have the same problem with 
ovirt-node-ng-image-update-4.2.7-0.1.rc1.el7.noarch.rpm

If I undo the installation (imgbase base --remove=ovirt-node-ng-image-update-4.2 . and yum remove ovirt-node-ng-image-update-4.2 .) and repeat it, I can reproduce the behavior (install, reboot, everything works with the new version, reboot, node status: DEGRADED).

I have this behavior on four test servers.


Here are df -h and ll /boot after the 1st reboot, and the output of imgbase layout and imgbase w:

[root@ovirt-n1 ~]# df -h
Dateisystem                                                         Größe Benutzt Verf. Verw% Eingehängt auf
/dev/mapper/onn_ovirt--n1-ovirt--node--ng--4.2.6.1--0.20180913.0+1   183G    3,3G  170G    2% /
devtmpfs                                                              95G       0   95G    0% /dev
tmpfs                                                                 95G     16K   95G    1% /dev/shm
tmpfs                                                                 95G     42M   95G    1% /run
tmpfs                                                                 95G       0   95G    0% /sys/fs/cgroup
/dev/mapper/onn_ovirt--n1-var                                         15G    187M   14G    2% /var
/dev/sda2                                                            976M    417M  492M   46% /boot
/dev/mapper/onn_ovirt--n1-tmp                                        976M    3,4M  906M    1% /tmp
/dev/mapper/onn_ovirt--n1-home                                       976M    2,6M  907M    1% /home
/dev/mapper/onn_ovirt--n1-var_log                                    7,8G    414M  7,0G    6% /var/log
/dev/mapper/onn_ovirt--n1-var_log_audit                              2,0G     39M  1,8G    3% /var/log/audit
/dev/mapper/onn_ovirt--n1-var_crash                                  9,8G     37M  9,2G    1% /var/crash
/dev/sda1                                                            200M    9,8M  191M    5% /boot/efi
gluster01.test.visa-ad.at:/st1                                       805G     71G  734G    9% /rhev/data-center/mnt/glusterSD/gluster01.test.visa-ad.at:_st1
glustermount:iso                                                      50G     20G   30G   40% /rhev/data-center/mnt/glusterSD/glustermount:iso
glustermount:export                                                  100G    4,8G   96G    5% /rhev/data-center/mnt/glusterSD/glustermount:export
tmpfs                                                                 19G       0   19G    0% /run/user/0
[root@ovirt-n1 ~]# ll /boot
insgesamt 187016
-rw-r--r--. 1 root root   140971  8. Mai 10:37 config-3.10.0-693.21.1.el7.x86_64
-rw-r--r--. 1 root root   147859 24. Sep 09:04 config-3.10.0-862.11.6.el7.x86_64
drwx------. 3 root root    16384  1. Jan 1970  efi
-rw-r--r--. 1 root root   192572  5. Nov 2016  elf-memtest86+-5.01
drwxr-xr-x. 2 root root     4096  4. Mai 18:34 extlinux
drwxr-xr-x. 2 root root     4096  4. Mai 18:16 grub
drwx------. 5 root root     4096  8. Mai 08:45 grub2
-rw-------. 1 root root 59917312  8. Mai 10:39 initramfs-3.10.0-693.21.1.el7.x86_64.img
-rw-------. 1 root root 21026491 11. Jul 12:10 initramfs-3.10.0-693.21.1.el7.x86_64kdump.img
-rw-------. 1 root root 26672143  4. Mai 18:24 initramfs-3.10.0-693.el7.x86_64.img
-rw-------. 1 root root 62740408 24. Sep 09:05 initramfs-3.10.0-862.11.6.el7.x86_64.img
-rw-r--r--. 1 root root   611296  4. Mai 18:23 initrd-plymouth.img
drwx------. 2 root root    16384  8. Mai 10:32 lost+found
-rw-r--r--. 1 root root   190896  5. Nov 2016  memtest86+-5.01
drwxr-xr-x. 2 root root     4096  8. Mai 10:39 ovirt-node-ng-4.2.3-0.20180504.0+1
drwxr-xr-x. 2 root root     4096  4. Sep 16:31 ovirt-node-ng-4.2.6-0.20180903.0+1
drwxr-xr-x. 2 root root     4096 24. Sep 09:05 ovirt-node-ng-4.2.6.1-0.20180913.0+1
-rw-r--r--. 1 root root   293361  8. Mai 10:37 symvers-3.10.0-693.21.1.el7.x86_64.gz
-rw-r--r--. 1 root root   305158 24. Sep 09:04 symvers-3.10.0-862.11.6.el7.x86_64.gz
-rw-------. 1 root root  3237433  8. Mai 10:37 System.map-3.10.0-693.21.1.el7.x86_64
-rw-------. 1 root root  3414344 24. Sep 09:04 System.map-3.10.0-862.11.6.el7.x86_64
-rw-r--r--. 1 root root   346490  3. Aug 2017  tboot.gz
-rw-r--r--. 1 root root    13145  3. Aug 2017  tboot-syms