[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-05 Thread Yuval Turgeman
Hi Oliver,

Sorry we couldn't get this to upgrade, but removing the base layers kinda
killed us - however, we already have some ideas on how to improve imgbased
to make it more friendly :)

Thanks for the update !
Yuval.


On Thu, Jul 5, 2018 at 3:52 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Hi Yuval,
>
> As you can see in my last attachment, after the LV metadata restore I was
> unable to modify LVs in pool00.
> The thin pool had queued transactions: got 23, expected 16 or so.
>
> I rebooted and tried repairing from a CentOS 7 USB stick, but could not
> access or remove the LVs because they hold a read lock, so a write lock is
> prohibited.
>
> The system boots only into the dracut emergency console, so for reliability
> I decided to reinstall it with a fresh 4.2.4 node after cleaning the disk. :-)
>
> Now it is running ovirt-node-ng-4.2.4.
> -
> Noticeable on this issue:
> - node-ng should not be installed on previously used CentOS disks without
> cleaning them first (var_crash LV).
> - Upgrades, e.g. to 4.2.4, should be easy to reinstall.
> - What about old versions in the LV thin pool: how can they be removed safely?
> - fstrim -av also trims LV thin pool volumes, nice :-)
>
> Many thanks to you, I have learned a lot about LVM.
>
> Oliver
>
> > On 03.07.2018 at 22:58, Yuval Turgeman wrote:
> >
> > OK Good, this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1
> still exists without its base - try this:
> >
> > 1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
> > 2. nodectl info
> >
> > On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
> > I did it, with issues, see attachment.
> >
> >
> >
> >
> >> On 03.07.2018 at 22:25, Yuval Turgeman wrote:
> >>
> >> Hi Oliver,
> >>
> >> I would try the following, but please notice it is *very* dangerous, so
> a backup is probably a good idea (man vgcfgrestore)...
> >>
> >> 1. vgcfgrestore --list onn_ovn-monster
> >> 2. search for a .vg file that was created before deleting those 2 lvs
> (ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
> >> 3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster
> --force
> >> 4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
> >> 5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
> >> 6. lvremove the lvs from the thinpool that are not mounted/used
> (var_crash?)
> >> 7. nodectl info to make sure everything is ok
> >> 8. reinstall the image-update rpm
> >>
> >> Thanks,
> >> Yuval.
> >>
> >>
> >>
> >> On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman 
> wrote:
> >> Hi Oliver,
> >>
> >> The KeyError happens because there are no bases for the layers.  For
> each LV that ends with a +1, there should be a base read-only LV without
> +1.  So for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This
> is the reason nodectl info fails, and the upgrade will fail also.  In your
> original email it looks OK - I have never seen this happen, was this a
> manual lvremove ? I need to reproduce this and check what can be done.
> >>
> >> You can find me on #ovirt (irc.oftc.net) also :)
> >>
> >>
> >> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
> >> Yuval, here comes the lvs output.
> >>
> >> The I/O errors are because the node is in maintenance.
> >> The LV root is from a previously installed CentOS 7.5.
> >> Then I installed node-ng 4.2.1 and got this mix.
> >> The LV turbo is an SSD in its own VG named ovirt.
> >>
> >> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 and (+1)
> >> because of a nodectl info error:
> >>
> >> KeyError: 
> >> Now I get the error @4.2.3:
> >> [root@ovn-monster ~]# nodectl info
> >> Traceback (most recent call last):
> >>   File "/usr/lib64/python2.7/runpy.py", line 162, in
> _run_module_as_main
> >> "__main__", fname, loader, pkg_name)
> >>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> >> exec code in run_globals
> >>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line
> 42, in 
> >> CliApplication()
> >>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
> 200, in CliApplication
> >> return cmdmap.command(args)
> >>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
> 118, in command
> >> return self.commands[command](**kwargs)
> >>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
> 76, in info
> >> Info(self.imgbased, self.machine).write()
> >>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
> __init__
> >> self._fetch_information()
> >>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
> _fetch_information
> >> self._get_layout()
> >>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
> _get_layout
> >> layout = LayoutParser(self.app.imgbase.layout()).parse()
> >>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line
> 155, in layout
> >> return self.naming.layout()
> 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-05 Thread Oliver Riesener
Hi Yuval,

As you can see in my last attachment, after the LV metadata restore I was unable to
modify LVs in pool00.
The thin pool had queued transactions: got 23, expected 16 or so.

I rebooted and tried repairing from a CentOS 7 USB stick, but could not access or
remove the LVs because they hold a read lock, so a write lock is prohibited.

The system boots only into the dracut emergency console, so for reliability I decided
to reinstall it with a fresh 4.2.4 node after cleaning the disk. :-)

Now it is running ovirt-node-ng-4.2.4.
-
Noticeable on this issue:
- node-ng should not be installed on previously used CentOS disks without
cleaning them first (var_crash LV).
- Upgrades, e.g. to 4.2.4, should be easy to reinstall.
- What about old versions in the LV thin pool: how can they be removed safely?
- fstrim -av also trims LV thin pool volumes, nice :-)

Many thanks to you, I have learned a lot about LVM.

Oliver

> On 03.07.2018 at 22:58, Yuval Turgeman wrote:
> 
> OK Good, this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1 
> still exists without its base - try this:
> 
> 1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
> 2. nodectl info
> 
> On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener 
>  wrote:
> I did it, with issues, see attachment.
> 
> 
> 
> 
>> On 03.07.2018 at 22:25, Yuval Turgeman wrote:
>> 
>> Hi Oliver,
>> 
>> I would try the following, but please notice it is *very* dangerous, so a 
>> backup is probably a good idea (man vgcfgrestore)...
>> 
>> 1. vgcfgrestore --list onn_ovn-monster 
>> 2. search for a .vg file that was created before deleting those 2 lvs 
>> (ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
>> 3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
>> 4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
>> 5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
>> 6. lvremove the lvs from the thinpool that are not mounted/used (var_crash?)
>> 7. nodectl info to make sure everything is ok
>> 8. reinstall the image-update rpm
>> 
>> Thanks,
>> Yuval.
>> 
>> 
>> 
>> On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman  wrote:
>> Hi Oliver, 
>> 
>> The KeyError happens because there are no bases for the layers.  For each LV 
>> that ends with a +1, there should be a base read-only LV without +1.  So for 
>> 3 ovirt-node-ng images, you're supposed to have 6 layers.  This is the 
>> reason nodectl info fails, and the upgrade will fail also.  In your original 
>> email it looks OK - I have never seen this happen, was this a manual 
>> lvremove ? I need to reproduce this and check what can be done.
>> 
>> You can find me on #ovirt (irc.oftc.net) also :)
>> 
>> 
>> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener 
>>  wrote:
>> Yuval, here comes the lvs output.
>> 
>> The I/O errors are because the node is in maintenance.
>> The LV root is from a previously installed CentOS 7.5.
>> Then I installed node-ng 4.2.1 and got this mix.
>> The LV turbo is an SSD in its own VG named ovirt.
>>
>> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 and (+1)
>> because of a nodectl info error:
>>
>> KeyError: 
>> Now I get the error @4.2.3:
>> [root@ovn-monster ~]# nodectl info
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>> "__main__", fname, loader, pkg_name)
>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>> exec code in run_globals
>>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in 
>> 
>> CliApplication()
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in 
>> CliApplication
>> return cmdmap.command(args)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in 
>> command
>> return self.commands[command](**kwargs)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in 
>> info
>> Info(self.imgbased, self.machine).write()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in 
>> __init__
>> self._fetch_information()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in 
>> _fetch_information
>> self._get_layout()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in 
>> _get_layout
>> layout = LayoutParser(self.app.imgbase.layout()).parse()
>>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in 
>> layout
>> return self.naming.layout()
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in 
>> layout
>> tree = self.tree(lvs)
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in 
>> tree
>> bases[img.base.nvr].layers.append(img)
>> KeyError: 
>> 
>> lvs -a
>> 
>> [root@ovn-monster ~]# lvs -a
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 
>> at 0: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of 4096 
>> at 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed (with solution)

2018-07-03 Thread Matt Simonsen

Many thanks to Yuval.

After moving the discussion to #ovirt, I tried "fstrim -a" and this 
allowed the upgrade to complete successfully.


Matt







On 07/03/2018 12:19 PM, Yuval Turgeman wrote:

Hi Matt,

I would try to run `fstrim -a` (man fstrim) and see if it frees 
anything from the thinpool.  If you do decide to run this, please send 
the output for lvs again.


Also, are you on #ovirt ?

Thanks,
Yuval.


On Tue, Jul 3, 2018 at 9:00 PM, Matt Simonsen wrote:


Thank you again for the assistance with this issue.

Below is the result of the command below.

In the future I am considering using different Logical RAID
Volumes to get different devices (sda, sdb, etc) for the oVirt
Node image & storage filesystem to simplify.  However I'd like to
understand why this upgrade failed and also how to correct it if
at all possible.

I believe I need to recreate the /var/crash partition? I
incorrectly removed it, is it simply a matter of using LVM to add
a new partition and format it?

Secondly, do you have any suggestions on how to move forward with
the error regarding the pool capacity? I'm not sure if this is a
legitimate error or problem in the upgrade process.

Thanks,

Matt




On 07/03/2018 03:58 AM, Yuval Turgeman wrote:

Not sure this is the problem, autoextend should be enabled for
the thinpool, `lvs -o +profile` should show imgbased-pool
(defined at /etc/lvm/profile/imgbased-pool.profile)

On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David <d...@redhat.com> wrote:

On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <m...@khoza.com> wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue
given I have several hundred GB of storage in the thin pool
that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
>   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
>   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34

I think your thinpool meta volume is close to full and needs
to be enlarged.
This quite likely happened because you extended the thinpool
without
extending the meta vol.

Check also 'lvs -a'.

This might be enough, but check the names first:

lvextend -L+200m onn_node1-g8-h4/pool00_tmeta

Best regards,

>   root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>   tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
>   var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
>   var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>   var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
>   var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
>   var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16
> [root@node6-g8-h4 ~]# vgs
>   VG              #PV #LV #SN Attr  VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version:
imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:

Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
command='update', debug=True, experimental=False,
format='liveimg', stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting
image

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling
binary: (['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {'close_fds':
True, 'stderr': -2}
> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned:
/tmp/mnt.1OhaU
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling
binary: (['mount',

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
OK Good, this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1
still exists without its base - try this:

1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
2. nodectl info

On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> I did it, with issues, see attachment.
>
>
>
>
> On 03.07.2018 at 22:25, Yuval Turgeman wrote:
>
> Hi Oliver,
>
> I would try the following, but please notice it is *very* dangerous, so a
> backup is probably a good idea (man vgcfgrestore)...
>
> 1. vgcfgrestore --list onn_ovn-monster
> 2. search for a .vg file that was created before deleting those 2 lvs (
> ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
> 3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
> 4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
> 5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
> 6. lvremove the lvs from the thinpool that are not mounted/used
> (var_crash?)
> 7. nodectl info to make sure everything is ok
> 8. reinstall the image-update rpm
>
> Thanks,
> Yuval.
>
>
>
> On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman 
> wrote:
>
>> Hi Oliver,
>>
>> The KeyError happens because there are no bases for the layers.  For each
>> LV that ends with a +1, there should be a base read-only LV without +1.  So
>> for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This is the
>> reason nodectl info fails, and the upgrade will fail also.  In your
>> original email it looks OK - I have never seen this happen, was this a
>> manual lvremove ? I need to reproduce this and check what can be done.
>>
>> You can find me on #ovirt (irc.oftc.net) also :)
>>
>>
>> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
>> oliver.riese...@hs-bremen.de> wrote:
>>
>>> Yuval, here comes the lvs output.
>>>
>>> The I/O errors are because the node is in maintenance.
>>> The LV root is from a previously installed CentOS 7.5.
>>> Then I installed node-ng 4.2.1 and got this mix.
>>> The LV turbo is an SSD in its own VG named ovirt.
>>>
>>> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 and (+1)
>>> because of a nodectl info error:
>>>
>>> KeyError: 
>>> Now I get the error @4.2.3:
>>> [root@ovn-monster ~]# nodectl info
>>> Traceback (most recent call last):
>>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>>> "__main__", fname, loader, pkg_name)
>>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>>> exec code in run_globals
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
>>> in 
>>> CliApplication()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
>>> 200, in CliApplication
>>> return cmdmap.command(args)
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
>>> 118, in command
>>> return self.commands[command](**kwargs)
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
>>> in info
>>> Info(self.imgbased, self.machine).write()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
>>> __init__
>>> self._fetch_information()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
>>> _fetch_information
>>> self._get_layout()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
>>> _get_layout
>>> layout = LayoutParser(self.app.imgbase.layout()).parse()
>>>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line
>>> 155, in layout
>>> return self.naming.layout()
>>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
>>> in layout
>>> tree = self.tree(lvs)
>>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
>>> in tree
>>> bases[img.base.nvr].layers.append(img)
>>> KeyError: 
>>>
>>> lvs -a
>>>
>>> [root@ovn-monster ~]# lvs -a
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 0: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 5497568559104: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 5497568616448: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 4096: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>>> 4096 at 0: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>>> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>>> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>>> 4096 at 4096: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>>> 4096 at 0: Eingabe-/Ausgabefehler
>>>   

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Hi Oliver,

I would try the following, but please notice it is *very* dangerous, so a
backup is probably a good idea (man vgcfgrestore)...

1. vgcfgrestore --list onn_ovn-monster
2. search for a .vg file that was created before deleting those 2 lvs (
ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
6. lvremove the lvs from the thinpool that are not mounted/used (var_crash?)
7. nodectl info to make sure everything is ok
8. reinstall the image-update rpm
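
For step 2, the old configurations are kept as plain-text .vg files under
/etc/lvm/archive, so something like this (just a sketch, the exact file names
will differ) should point to the candidates - pick the newest one that still
contains both LV names:

    vgcfgrestore --list onn_ovn-monster
    grep -l 'ovirt-node-ng-4.2.3-0.20180524.0' /etc/lvm/archive/*.vg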

Thanks,
Yuval.



On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman  wrote:

> Hi Oliver,
>
> The KeyError happens because there are no bases for the layers.  For each
> LV that ends with a +1, there should be a base read-only LV without +1.  So
> for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This is the
> reason nodectl info fails, and the upgrade will fail also.  In your
> original email it looks OK - I have never seen this happen, was this a
> manual lvremove ? I need to reproduce this and check what can be done.
>
> You can find me on #ovirt (irc.oftc.net) also :)
>
>
> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
>
>> Yuval, here comes the lvs output.
>>
>> The I/O errors are because the node is in maintenance.
>> The LV root is from a previously installed CentOS 7.5.
>> Then I installed node-ng 4.2.1 and got this mix.
>> The LV turbo is an SSD in its own VG named ovirt.
>>
>> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 and (+1)
>> because of a nodectl info error:
>>
>> KeyError: 
>> Now I get the error @4.2.3:
>> [root@ovn-monster ~]# nodectl info
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>> "__main__", fname, loader, pkg_name)
>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>> exec code in run_globals
>>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
>> in 
>> CliApplication()
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
>> in CliApplication
>> return cmdmap.command(args)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
>> in command
>> return self.commands[command](**kwargs)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
>> in info
>> Info(self.imgbased, self.machine).write()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
>> __init__
>> self._fetch_information()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
>> _fetch_information
>> self._get_layout()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
>> _get_layout
>> layout = LayoutParser(self.app.imgbase.layout()).parse()
>>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155,
>> in layout
>> return self.naming.layout()
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
>> in layout
>> tree = self.tree(lvs)
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
>> in tree
>> bases[img.base.nvr].layers.append(img)
>> KeyError: 
>>
>> lvs -a
>>
>> [root@ovn-monster ~]# lvs -a
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 0: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 5497568559104: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 5497568616448: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 4096: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 0: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 4096: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>> 4096 at 0: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>> 4096 at 4096: Eingabe-/Ausgabefehler
>>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after
>> 0 of 4096 at 0: Eingabe-/Ausgabefehler
>>   

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Hi Oliver,

The KeyError happens because there are no bases for the layers.  For each
LV that ends with a +1, there should be a base read-only LV without +1.  So
for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This is the
reason nodectl info fails, and the upgrade will fail also.  In your
original email it looks OK - I have never seen this happen, was this a
manual lvremove ? I need to reproduce this and check what can be done.
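
If you want to double-check the pairing, something like this (a sketch, adjust
the VG name) lists just the ovirt-node-ng LVs; every "+1" layer should have its
base directly above it:

    lvs --noheadings -o lv_name onn_ovn-monster | grep ovirt-node-ng | sort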

You can find me on #ovirt (irc.oftc.net) also :)


On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Yuval, here comes the lvs output.
>
> The I/O errors are because the node is in maintenance.
> The LV root is from a previously installed CentOS 7.5.
> Then I installed node-ng 4.2.1 and got this mix.
> The LV turbo is an SSD in its own VG named ovirt.
>
> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 and (+1)
> because of a nodectl info error:
>
> KeyError: 
> Now I get the error @4.2.3:
> [root@ovn-monster ~]# nodectl info
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
> in 
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
> in CliApplication
> return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
> in command
> return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
> in info
> Info(self.imgbased, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
> __init__
> self._fetch_information()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
> _fetch_information
> self._get_layout()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
> _get_layout
> layout = LayoutParser(self.app.imgbase.layout()).parse()
>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155,
> in layout
> return self.naming.layout()
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
> in layout
> tree = self.tree(lvs)
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
> in tree
> bases[img.base.nvr].layers.append(img)
> KeyError: 
>
> lvs -a
>
> [root@ovn-monster ~]# lvs -a
>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
> 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
> 4096 at 5497568559104: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
> 4096 at 5497568616448: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
> 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
> 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
> 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
> 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
> 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0
> of 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0
> of 4096 at 536805376: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0
> of 4096 at 536862720: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0
> of 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of
> 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of
> 4096 at 134152192: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of
> 4096 at 134209536: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of
> 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0
> of 4096 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Hi Matt,

I would try to run `fstrim -a` (man fstrim) and see if it frees anything
from the thinpool.  If you do decide to run this, please send the output
for lvs again.
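
Something along these lines (a sketch) - the -v just reports how much was trimmed
per mountpoint, and the second command shows whether the thinpool usage actually
dropped:

    fstrim -av
    lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4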

Also, are you on #ovirt ?

Thanks,
Yuval.


On Tue, Jul 3, 2018 at 9:00 PM, Matt Simonsen  wrote:

> Thank you again for the assistance with this issue.
>
> Below is the result of the command below.
>
> In the future I am considering using different Logical RAID Volumes to get
> different devices (sda, sdb, etc) for the oVirt Node image & storage
> filesystem to simplify.  However I'd like to understand why this upgrade
> failed and also how to correct it if at all possible.
>
> I believe I need to recreate the /var/crash partition? I incorrectly
> removed it, is it simply a matter of using LVM to add a new partition and
> format it?
>
> Secondly, do you have any suggestions on how to move forward with the
> error regarding the pool capacity? I'm not sure if this is a legitimate
> error or problem in the upgrade process.
>
> Thanks,
>
> Matt
>
>
>
>
> On 07/03/2018 03:58 AM, Yuval Turgeman wrote:
>
> Not sure this is the problem, autoextend should be enabled for the
> thinpool, `lvs -o +profile` should show imgbased-pool (defined at
> /etc/lvm/profile/imgbased-pool.profile)
>
On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David wrote:
>
>> On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen  wrote:
>> >
>> > This error adds some clarity.
>> >
>> > That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>> >
>> > How do you suggest I proceed?
>> >
>> > Thank you for your help,
>> >
>> > Matt
>> >
>> >
>> >
>> > [root@node6-g8-h4 ~]# lvs
>> >
>> >   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>> >   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
>> >   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>> >   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>> >   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>> >   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
>> >   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34
>>
>> I think your thinpool meta volume is close to full and needs to be
>> enlarged.
>> This quite likely happened because you extended the thinpool without
>> extending the meta vol.
>>
>> Check also 'lvs -a'.
>>
>> This might be enough, but check the names first:
>>
>> lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
>>
>> Best regards,
>>
>> >   root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>> >   tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
>> >   var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
>> >   var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>> >   var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
>> >   var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
>> >   var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16
>> > [root@node6-g8-h4 ~]# vgs
>> >   VG  #PV #LV #SN Attr   VSize  VFree
>> >   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>> >
>> >
>> > 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> > 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-
>> node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> > 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180
>> 626.0.el7.squashfs.img'
>> > 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary:
>> (['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
>> > 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> > 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> > 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> > 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Matt Simonsen

Thank you again for the assistance with this issue.

Below is the result of the command below.

In the future I am considering using different Logical RAID Volumes to 
get different devices (sda, sdb, etc) for the oVirt Node image & storage 
filesystem to simplify.  However I'd like to understand why this upgrade 
failed and also how to correct it if at all possible.


I believe I need to recreate the /var/crash partition? I incorrectly 
removed it, is it simply a matter of using LVM to add a new partition 
and format it?


Secondly, do you have any suggestions on how to move forward with the 
error regarding the pool capacity? I'm not sure if this is a legitimate 
error or problem in the upgrade process.


Thanks,

Matt




On 07/03/2018 03:58 AM, Yuval Turgeman wrote:
Not sure this is the problem, autoextend should be enabled for the 
thinpool, `lvs -o +profile` should show imgbased-pool (defined at 
/etc/lvm/profile/imgbased-pool.profile)


On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David wrote:


On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <m...@khoza.com> wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given
I have several hundred GB of storage in the thin pool that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
>   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
>   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34

I think your thinpool meta volume is close to full and needs to be
enlarged.
This quite likely happened because you extended the thinpool without
extending the meta vol.

Check also 'lvs -a'.

This might be enough, but check the names first:

lvextend -L+200m onn_node1-g8-h4/pool00_tmeta

Best regards,

>   root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>   tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
>   var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
>   var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>   var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
>   var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
>   var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16
> [root@node6-g8-h4 ~]# vgs
>   VG              #PV #LV #SN Attr   VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version:
imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:

Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
command='update', debug=True, experimental=False,
format='liveimg', stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {'close_fds': True,
'stderr': -2}
> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned:
/tmp/mnt.1OhaU
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary:
(['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {}
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
'/tmp/mnt.1OhaU/LiveOS/rootfs.img'
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling:
(['mktemp', '-d', '--tmpdir', 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Oliver, can you share the output from lvs ?

On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Hi Yuval,
>
> * Reinstallation failed, because the LV already exists.
>   ovirt-node-ng-4.2.4-0.20180626.0     onn_ovn-monster Vri-a-tz-k <252,38g pool00                                  0,85
>   ovirt-node-ng-4.2.4-0.20180626.0+1   onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
>   See attachment imgbased.reinstall.log
>
> * I removed them and reinstalled again, without luck.
>
>   I got KeyError: 
>
>   See attachment imgbased.rereinstall.log
>
> Also a new problem with nodectl info:
> [root@ovn-monster tmp]# nodectl info
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
> in 
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
> in CliApplication
> return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
> in command
> return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
> in info
> Info(self.imgbased, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
> __init__
> self._fetch_information()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
> _fetch_information
> self._get_layout()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
> _get_layout
> layout = LayoutParser(self.app.imgbase.layout()).parse()
>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155,
> in layout
> return self.naming.layout()
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
> in layout
> tree = self.tree(lvs)
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
> in tree
> bases[img.base.nvr].layers.append(img)
> KeyError: 
>
>
>
>
>
>
> On 02.07.2018 at 22:22, Oliver Riesener <oliver.riese...@hs-bremen.de> wrote:
>
> Hi Yuval,
>
> Yes, you are right, there was an unused and deactivated var_crash LV.
>
> * I activated it and mounted it on /var/crash via /etc/fstab.
> * /var/crash was empty, and the LV already has an ext4 fs.
>   var_crash  onn_ovn-monster Vwi-aotz--  10,00g pool00  2,86
>
> * Now I will try to upgrade again.
>   * yum reinstall ovirt-node-ng-image-update.noarch
>   * yum reinstall ovirt-node-ng-image-update.noarch
>
> BTW, no more imgbased.log files found.
>
> On 02.07.2018 at 20:57, Yuval Turgeman wrote:
>
> From your log:
>
> AssertionError: Path is already a volume: /var/crash
>
> Basically, it means that you already have an LV for /var/crash but it's
> not mounted for some reason, so either mount it (if the data is good) or
> remove it and then reinstall the image-update rpm.  Before that, check that
> you don't have any other LVs in that same state - or you can post the output
> for lvs... btw, do you have any more imgbased.log files lying around?
>
> You can find more details about this here:
>
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
>
> On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <oliver.riese...@hs-bremen.de> wrote:
>
>> Hi,
>>
>> i attached my /tmp/imgbased.log
>>
>> Cheers
>>
>> Oliver
>>
>>
>>
>> On 02.07.2018 at 13:58, Yuval Turgeman wrote:
>>
>> Looks like the upgrade script failed - can you please attach
>> /var/log/imgbased.log or /tmp/imgbased.log ?
>>
>> Thanks,
>> Yuval.
>>
>> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola wrote:
>>
>>> Yuval, can you please have a look?
>>>
>>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener:
>>>
 Yes, here is the same.

 It seems the bootloader isn't configured right?

 I did the upgrade and reboot to 4.2.4 from the UI and got:

 [root@ovn-monster ~]# nodectl info
 layers:
   ovirt-node-ng-4.2.4-0.20180626.0:
 ovirt-node-ng-4.2.4-0.20180626.0+1
   ovirt-node-ng-4.2.3.1-0.20180530.0:
 ovirt-node-ng-4.2.3.1-0.20180530.0+1
   ovirt-node-ng-4.2.3-0.20180524.0:
 ovirt-node-ng-4.2.3-0.20180524.0+1
   ovirt-node-ng-4.2.1.1-0.20180223.0:
 ovirt-node-ng-4.2.1.1-0.20180223.0+1
 bootloader:
   default: ovirt-node-ng-4.2.3-0.20180524.0+1
   entries:
 ovirt-node-ng-4.2.3-0.20180524.0+1:
   index: 0
   title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Not sure this is the problem, autoextend should be enabled for the
thinpool, `lvs -o +profile` should show imgbased-pool (defined at
/etc/lvm/profile/imgbased-pool.profile)
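
For example (a sketch - the exact values below are from memory and may differ
in your build):

    lvs -o +profile onn_node1-g8-h4/pool00
    cat /etc/lvm/profile/imgbased-pool.profile
    # expected to contain something roughly like:
    # activation {
    #     thin_pool_autoextend_threshold = 80
    #     thin_pool_autoextend_percent = 20
    # }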

On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David  wrote:

> On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen  wrote:
> >
> > This error adds some clarity.
> >
> > That said, I'm a bit unsure how the space can be the issue given I have
> several hundred GB of storage in the thin pool that's unused...
> >
> > How do you suggest I proceed?
> >
> > Thank you for your help,
> >
> > Matt
> >
> >
> >
> > [root@node6-g8-h4 ~]# lvs
> >
> >   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
> >   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
> >   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
> >   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
> >   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
> >   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
> >   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34
>
> I think your thinpool meta volume is close to full and needs to be
> enlarged.
> This quite likely happened because you extended the thinpool without
> extending the meta vol.
>
> Check also 'lvs -a'.
>
> This might be enough, but check the names first:
>
> lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
>
> Best regards,
>
> >   root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
> >   tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
> >   var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
> >   var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
> >   var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
> >   var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
> >   var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16
> > [root@node6-g8-h4 ~]# vgs
> >   VG  #PV #LV #SN Attr   VSize  VFree
> >   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
> >
> >
> > 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
> > 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> command='update', debug=True, experimental=False, format='liveimg',
> stream='Image')
> > 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
> 20180626.0.el7.squashfs.img'
> > 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
> '-d', '--tmpdir', 'mnt.X'],) {}
> > 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
> > 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> u'/tmp/mnt.1OhaU'],) {}
> > 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
> > 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
> > 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
> > 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
> '-d', '--tmpdir', 'mnt.X'],) {}
> > 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
> > 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
> > 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
> True, 'stderr': -2}
> > 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
> > 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr:
> ovirt-node-ng-4.2.4-0.20180626.0
> > 2018-06-29 14:19:31,189 [DEBUG] 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yedidyah Bar David
On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen  wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given I have 
> several hundred GB of storage in the thin pool that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
>   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
>   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34

I think your thinpool meta volume is close to full and needs to be enlarged.
This quite likely happened because you extended the thinpool without
extending the meta vol.

Check also 'lvs -a'.

This might be enough, but check the names first:

lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
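
For example, to confirm the names and see how full the metadata LV is before
extending (a sketch, using the VG name above; with -a the hidden [pool00_tmeta]
volume is listed too):

    lvs -a -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4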

Best regards,

>   root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>   tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
>   var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
>   var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>   var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
>   var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
>   var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16
> [root@node6-g8-h4 ~]# vgs
>   VG  #PV #LV #SN Attr   VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: 
> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>  command='update', debug=True, experimental=False, format='liveimg', 
> stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image 
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', 
> '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', 
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', 
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>  u'/tmp/mnt.1OhaU'],) {}
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', 
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>  u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at 
> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', 
> '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', 
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', 
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', 
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': True, 
> 'stderr': -2}
> 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: 
> ovirt-node-ng-4.2.4-0.20180626.0
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt', 
> '--noheadings', '-o', 'SOURCE', '/'],) {}
> 2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', 
> '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,203 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Oliver Riesener
Hi Yuval,

* Reinstallation failed, because the LV already exists.
  ovirt-node-ng-4.2.4-0.20180626.0     onn_ovn-monster Vri-a-tz-k <252,38g pool00                                  0,85
  ovirt-node-ng-4.2.4-0.20180626.0+1   onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
  See attachment imgbased.reinstall.log

* I removed them and reinstalled again, without luck.

  I got KeyError: 

  See attachment imgbased.rereinstall.log

Also a new problem with nodectl info:

[root@ovn-monster tmp]# nodectl info
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in 
    CliApplication()
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
    return cmdmap.command(args)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
    return self.commands[command](**kwargs)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
    Info(self.imgbased, self.machine).write()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
    self._fetch_information()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
    self._get_layout()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
    layout = LayoutParser(self.app.imgbase.layout()).parse()
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
    return self.naming.layout()
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
    tree = self.tree(lvs)
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
    bases[img.base.nvr].layers.append(img)
KeyError: 

imgbased.log.reinstall.gz
Description: GNU Zip compressed data


imgbased.log.rereinstall.gz
Description: GNU Zip compressed data
> On 02.07.2018 at 22:22, Oliver Riesener wrote:
>
> Hi Yuval,
>
> Yes, you are right, there was an unused and deactivated var_crash LV.
>
> * I activated it and mounted it on /var/crash via /etc/fstab.
> * /var/crash was empty, and the LV already has an ext4 fs.
>   var_crash  onn_ovn-monster Vwi-aotz--  10,00g pool00  2,86
>
> * Now I will try to upgrade again.
>   * yum reinstall ovirt-node-ng-image-update.noarch
>
> BTW, no more imgbased.log files found.
>
>> On 02.07.2018 at 20:57, Yuval Turgeman wrote:
>>
>> From your log:
>>
>> AssertionError: Path is already a volume: /var/crash
>>
>> Basically, it means that you already have an LV for /var/crash but it's not
>> mounted for some reason, so either mount it (if the data is good) or remove it
>> and then reinstall the image-update rpm.  Before that, check that you don't
>> have any other LVs in that same state - or you can post the output for lvs...
>> btw, do you have any more imgbased.log files lying around?
>>
>> You can find more details about this here:
>> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
>>
>>> On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener wrote:
>>>
>>> Hi, i attached my /tmp/imgbased.log
>>> Cheers
>>> Oliver
>>>
>>>> On 02.07.2018 at 13:58, Yuval Turgeman wrote:
>>>>
>>>> Looks like the upgrade script failed - can you please attach
>>>> /var/log/imgbased.log or /tmp/imgbased.log?
>>>> Thanks,
>>>> Yuval.
>>>>
>>>>> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola wrote:
>>>>>
>>>>> Yuval, can you please have a look?
>>>>>
>>>>>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener:
>>>>>>
>>>>>> Yes, here is the same.
>>>>>> It seems the bootloader isn't configured right?
>>>>>> I did the upgrade and reboot to 4.2.4 from the UI and got:
>>>>>>
>>>>>> [root@ovn-monster ~]# nodectl info
>>>>>> layers:
>>>>>>   ovirt-node-ng-4.2.4-0.20180626.0:
>>>>>>     ovirt-node-ng-4.2.4-0.20180626.0+1
>>>>>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>>>>>>     ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>>>>>   ovirt-node-ng-4.2.3-0.20180524.0:
>>>>>>     ovirt-node-ng-4.2.3-0.20180524.0+1
>>>>>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>>>>>>     ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>>>>> bootloader:
>>>>>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>>>>>   entries:
>>>>>>     ovirt-node-ng-4.2.3-0.20180524.0+1:
>>>>>>       index: 0
>>>>>>       title: ovirt-node-ng-4.2.3-0.20180524.0
>>>>>>       kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>>>>>       args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
>>>>>>       initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>>>>>       root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>>>>>     ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>>>>>       index: 1
>>>>>>       title:

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Matt Simonsen

On 07/02/2018 12:55 PM, Yuval Turgeman wrote:

Are you mounted with discard ? perhaps fstrim ?





I believe that I have all the default options, and I have one extra 
partition for images.



#
# /etc/fstab
# Created by anaconda on Sat Oct 31 18:04:29 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1 / ext4 defaults,discard 1 1

UUID=84ca8776-61d6-4b19-9104-99730932b45a /boot ext4    defaults    1 2
/dev/mapper/onn_node1--g8--h4-home /home ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-tmp /tmp ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var /var ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var_local_images /var/local/images ext4    defaults    1 2

/dev/mapper/onn_node1--g8--h4-var_log /var/log ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var_log_audit /var/log/audit ext4 defaults,discard 1 2



At this point I don't have a /var/crash mounted (or an LV, even).  I 
assume I should re-create it.



I noticed on another server with the same problem, the var_crash LV 
isn't available.  Could this be part of the problem?


  --- Logical volume ---
  LV Path    /dev/onn/var_crash
  LV Name    var_crash
  VG Name    onn
  LV UUID    X1TPMZ-XeZP-DGYv-woZW-3kvk-vWZu-XQcFhL
  LV Write Access    read/write
  LV Creation host, time node1-g7-h1.srihosting.com, 2018-04-05 
07:03:35 -0700

  LV Pool name   pool00
  LV Status  NOT available
  LV Size    10.00 GiB
  Current LE 2560
  Segments   1
  Allocation inherit
  Read ahead sectors auto



Thanks
Matt
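
For the situation above (var_crash missing on one host and "NOT available" on another), a minimal recovery sketch could look like the following; the VG/pool names (onn, pool00) are taken from the lvdisplay output above and the size is illustrative, so adjust everything to the local layout:

# if the LV exists but is only inactive, activate it
lvchange -ay onn/var_crash
# if it is missing entirely, recreate it as a thin volume in pool00 and put a filesystem on it
lvcreate -V 10G --thinpool onn/pool00 -n var_crash
mkfs.ext4 /dev/onn/var_crash
# mount it (and add a matching /etc/fstab entry) so the next upgrade attempt finds it in the expected state
mkdir -p /var/crash
mount /dev/onn/var_crash /var/crash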
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WAV667HP5HU6IXGJTLZQ6YSMHSHTHF6M/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Oliver Riesener
Hi Yuval,

yes you are right, there was an unused and deactivated var_crash LV.

* I activated and mounted it to /var/crash via /etc/fstab.
* /var/crash was empty, and the LV already has an ext4 fs.
  var_crash  onn_ovn-monster Vwi-aotz--  10,00g pool00  2,86

* Now i will try to upgrade again.
  * yum reinstall ovirt-node-ng-image-update.noarch

BTW, no more imgbased.log files found.
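
As a small pre-flight sketch before re-running the reinstall (VG name onn_ovn-monster as in the lvs output above):

findmnt /var/crash                              # the ext4 LV should now be mounted here
lvs -o lv_name,lv_attr,pool_lv onn_ovn-monster  # look for any other inactive or unmounted thin LVs
yum reinstall ovirt-node-ng-image-update.noarch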

> Am 02.07.2018 um 20:57 schrieb Yuval Turgeman :
> 
> From your log: 
> 
> AssertionError: Path is already a volume: /var/crash
> 
> Basically, it means that you already have an LV for /var/crash but it's not 
> mounted for some reason, so either mount it (if the data good) or remove it 
> and then reinstall the image-update rpm.  Before that, check that you dont 
> have any other LVs in that same state - or you can post the output for lvs... 
> btw, do you have any more imgbased.log files laying around ?
> 
> You can find more details about this here:
> 
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
>  
> 
> 
> On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener  > wrote:
> Hi, 
> 
> i attached my /tmp/imgbased.log
> 
> Sheers
> 
> Oliver
> 
> 
> 
>> Am 02.07.2018 um 13:58 schrieb Yuval Turgeman > >:
>> 
>> Looks like the upgrade script failed - can you please attach 
>> /var/log/imgbased.log or /tmp/imgbased.log ?
>> 
>> Thanks,
>> Yuval.
>> 
>> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola > > wrote:
>> Yuval, can you please have a look?
>> 
>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener > >:
>> Yes, here is the same.
>> 
>> It seams the bootloader isn’t configured right ?
>>  
>> I did the Upgrade and reboot to 4.2.4 from UI and got:
>> 
>> [root@ovn-monster ~]# nodectl info
>> layers: 
>>   ovirt-node-ng-4.2.4-0.20180626.0: 
>> ovirt-node-ng-4.2.4-0.20180626.0+1
>>   ovirt-node-ng-4.2.3.1-0.20180530.0: 
>> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>   ovirt-node-ng-4.2.3-0.20180524.0: 
>> ovirt-node-ng-4.2.3-0.20180524.0+1
>>   ovirt-node-ng-4.2.1.1-0.20180223.0: 
>> ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> bootloader: 
>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>   entries: 
>> ovirt-node-ng-4.2.3-0.20180524.0+1: 
>>   index: 0
>>   title: ovirt-node-ng-4.2.3-0.20180524.0
>>   kernel: 
>> /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>   args: "ro crashkernel=auto rd.lvm.lv 
>> =onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 
>> rd.lvm.lv =onn_ovn-monster/swap 
>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 
>> img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
>>   initrd: 
>> /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>> ovirt-node-ng-4.2.1.1-0.20180223.0+1: 
>>   index: 1
>>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>   kernel: 
>> /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>   args: "ro crashkernel=auto rd.lvm.lv 
>> =onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 
>> rd.lvm.lv =onn_ovn-monster/swap 
>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 
>> img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>   initrd: 
>> /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>> [root@ovn-monster ~]# uptime
>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>> 
>>> Am 29.06.2018 um 23:53 schrieb Matt Simonsen >> >:
>>> 
>>> Hello,
>>> 
>>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node 
>>> platform and it doesn't appear the updates worked.
>>> 
>>> 
>>> [root@node6-g8-h4 ~]# yum update
>>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>>   : package_upload, product-id, search-disabled-repos, 
>>> subscription-
>>>   : manager
>>> This system is not registered with an entitlement server. You can use 
>>> subscription-manager to register.
>>> Loading mirror speeds from cached hostfile
>>>  * ovirt-4.2-epel: linux.mirrors.es.net 
>>> Resolving Dependencies
>>> --> Running transaction check
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be 
>>> updated
>>> ---> Package 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Btw, removing /var/crash was directed to Oliver - you have different
problems


On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen  wrote:

> Yes, it shows 8g on the VG
>
> I removed the LV for /var/crash, then installed again, and it is still
> failing on the step:
>
>
> 2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate',
> '--thin', '--virtualsize', u'53750005760B', '--name',
> 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],)
> {'close_fds': True, 'stderr': -2}
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception!   Cannot create
> new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached
> threshold.
>
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.ZYOjC'],) {}
>
>
> Thanks
>
> Matt
>
>
>
>
>
> On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
>
> Not in front of my laptop so it's a little hard to read but does it say 8g
> free on the vg ?
>
> On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:
>
>> This error adds some clarity.
>>
>> That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>>
>> How do you suggest I proceed?
>>
>> Thank you for your help,
>>
>> Matt
>>
>>
>> [root@node6-g8-h4 ~]# lvs
>>
>>   LV   VG  Attr
>> LSize   Pool   Origin Data%  Meta%  Move Log
>> Cpy%Sync Convert
>>   home onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 4.79
>>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
>> <50.06g pool00 root
>>
>>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
>> <50.06g pool00
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
>> <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
>> 6.95
>>   pool00   onn_node1-g8-h4 twi-aotz--
>> <1.30t   76.63
>> 50.34
>>   root onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00
>>
>>   tmp  onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 5.04
>>   var  onn_node1-g8-h4 Vwi-aotz--
>> 15.00g pool00
>> 5.86
>>   var_crashonn_node1-g8-h4 Vwi---tz--
>> 10.00g pool00
>>
>>   var_local_images onn_node1-g8-h4 Vwi-aotz--
>> 1.10t pool00
>> 89.72
>>   var_log  onn_node1-g8-h4 Vwi-aotz--
>> 8.00g pool00
>> 6.84
>>   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>> 2.00g pool00
>> 6.16
>> [root@node6-g8-h4 ~]# vgs
>>   VG  #PV #LV #SN Attr   VSize  VFree
>>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>>
>>
>> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//
>> ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
>> 20180626.0.el7.squashfs.img'
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
>> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
>> True, 'stderr': -2}
>> 2018-06-29 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Are you mounted with discard ? perhaps fstrim ?
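
A short sketch of that check; without the discard mount option (or a periodic fstrim), blocks freed inside the filesystems are not returned to the thin pool, so the pool can look full even though the filesystems have free space. The LV/VG names are the ones from the lvs output quoted below:

findmnt -o TARGET,FSTYPE,OPTIONS /var/local/images                    # is 'discard' among the mount options?
fstrim -av                                                            # trim every mounted filesystem that supports it
lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4/pool00   # pool usage should drop if blocks were reclaimed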

On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen  wrote:

> Yes, it shows 8g on the VG
>
> I removed the LV for /var/crash, then installed again, and it is still
> failing on the step:
>
>
> 2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate',
> '--thin', '--virtualsize', u'53750005760B', '--name',
> 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],)
> {'close_fds': True, 'stderr': -2}
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception!   Cannot create
> new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached
> threshold.
>
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.ZYOjC'],) {}
>
>
> Thanks
>
> Matt
>
>
>
>
>
> On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
>
> Not in front of my laptop so it's a little hard to read but does it say 8g
> free on the vg ?
>
> On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:
>
>> This error adds some clarity.
>>
>> That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>>
>> How do you suggest I proceed?
>>
>> Thank you for your help,
>>
>> Matt
>>
>>
>> [root@node6-g8-h4 ~]# lvs
>>
>>   LV   VG  Attr
>> LSize   Pool   Origin Data%  Meta%  Move Log
>> Cpy%Sync Convert
>>   home onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 4.79
>>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
>> <50.06g pool00 root
>>
>>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
>> <50.06g pool00
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
>> <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
>> 6.95
>>   pool00   onn_node1-g8-h4 twi-aotz--
>> <1.30t   76.63
>> 50.34
>>   root onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00
>>
>>   tmp  onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 5.04
>>   var  onn_node1-g8-h4 Vwi-aotz--
>> 15.00g pool00
>> 5.86
>>   var_crashonn_node1-g8-h4 Vwi---tz--
>> 10.00g pool00
>>
>>   var_local_images onn_node1-g8-h4 Vwi-aotz--
>> 1.10t pool00
>> 89.72
>>   var_log  onn_node1-g8-h4 Vwi-aotz--
>> 8.00g pool00
>> 6.84
>>   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>> 2.00g pool00
>> 6.16
>> [root@node6-g8-h4 ~]# vgs
>>   VG  #PV #LV #SN Attr   VSize  VFree
>>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>>
>>
>> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//
>> ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
>> 20180626.0.el7.squashfs.img'
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
>> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
>> True, 'stderr': -2}
>> 2018-06-29 14:19:31,177 [DEBUG] (MainThread) 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Matt Simonsen

Yes, it shows 8g on the VG

I removed the LV for /var/crash, then installed again, and it is still 
failing on the step:



2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate', 
'--thin', '--virtualsize', u'53750005760B', '--name', 
'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) 
{'close_fds': True, 'stderr': -2}
2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception!   Cannot create 
new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached 
threshold.


2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount', 
'-l', u'/tmp/mnt.ZYOjC'],) {}



Thanks

Matt
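
As far as I can tell, that lvcreate refusal is the thin-pool threshold check: before adding another thin volume, lvcreate compares the pool's current data/metadata usage against activation/thin_pool_autoextend_threshold in lvm.conf. A diagnostic sketch (names from the output in this thread; the remediation lines are options, not a prescription):

lvs -a -o lv_name,lv_attr,lv_size,data_percent,metadata_percent onn_node1-g8-h4   # pool00 usage vs. the other thin LVs
lvmconfig activation/thin_pool_autoextend_threshold                               # the threshold being compared against
lvmconfig activation/thin_pool_autoextend_percent
# possible ways out: reclaim space (fstrim -av), remove thin LVs that are truly unused,
# or grow the pool with whatever the VG still has free, e.g.:
# lvextend -L +8G onn_node1-g8-h4/pool00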





On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
Not in front of my laptop so it's a little hard to read but does it 
say 8g free on the vg ?


On Mon, Jul 2, 2018, 20:00 Matt Simonsen > wrote:


This error adds some clarity.

That said, I'm a bit unsure how the space can be the issue given I
have several hundred GB of storage in the thin pool that's unused...

How do you suggest I proceed?

Thank you for your help,

Matt



[root@node6-g8-h4 ~]# lvs

  LV   VG Attr   LSize   Pool
Origin Data%  Meta%  Move Log Cpy%Sync
Convert
  home onn_node1-g8-h4
Vwi-aotz--   1.00g pool00 4.79
  ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
<50.06g pool00 root
  ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
<50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
  ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
<50.06g pool00
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
<50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0 6.95
  pool00   onn_node1-g8-h4 twi-aotz--
<1.30t   76.63 50.34
  root onn_node1-g8-h4 Vwi---tz--
<50.06g pool00
  tmp  onn_node1-g8-h4
Vwi-aotz--   1.00g pool00 5.04
  var  onn_node1-g8-h4 Vwi-aotz-- 
15.00g pool00 5.86
  var_crash    onn_node1-g8-h4 Vwi---tz-- 
10.00g pool00
  var_local_images onn_node1-g8-h4
Vwi-aotz--   1.10t pool00 89.72
  var_log  onn_node1-g8-h4
Vwi-aotz--   8.00g pool00 6.84
  var_log_audit    onn_node1-g8-h4
Vwi-aotz--   2.00g pool00 6.16
[root@node6-g8-h4 ~]# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g


2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:

Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
command='update', debug=True, experimental=False,
format='liveimg', stream='Image')
2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp',
'-d', '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary:
(['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
'/tmp/mnt.1OhaU/LiveOS/rootfs.img'
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp',
'-d', '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary:
(['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img',
u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],)
{'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr:

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
From your log:

AssertionError: Path is already a volume: /var/crash

Basically, it means that you already have an LV for /var/crash but it's not
mounted for some reason, so either mount it (if the data is good) or remove it
and then reinstall the image-update rpm.  Before that, check that you don't
have any other LVs in that same state - or you can post the output of
lvs... btw, do you have any more imgbased.log files lying around ?

You can find more details about this here:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
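
A rough sketch of that check-then-fix sequence (the LV/VG names are illustrative, matching the examples used elsewhere in this thread):

lvs -o lv_name,lv_attr,lv_size,pool_lv      # spot thin LVs that exist but are inactive or unmounted
# if the data matters: activate and mount the volume
lvchange -ay onn/var_crash
mount /dev/onn/var_crash /var/crash
# if it is disposable: remove it instead
# lvremove onn/var_crash
# then retry the upgrade
yum reinstall ovirt-node-ng-image-update.noarch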

On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Hi,
>
> i attached my /tmp/imgbased.log
>
> Sheers
>
> Oliver
>
>
>
> Am 02.07.2018 um 13:58 schrieb Yuval Turgeman :
>
> Looks like the upgrade script failed - can you please attach
> /var/log/imgbased.log or /tmp/imgbased.log ?
>
> Thanks,
> Yuval.
>
> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
> wrote:
>
>> Yuval, can you please have a look?
>>
>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>>
>>> Yes, here is the same.
>>>
>>> It seams the bootloader isn’t configured right ?
>>>
>>> I did the Upgrade and reboot to 4.2.4 from UI and got:
>>>
>>> [root@ovn-monster ~]# nodectl info
>>> layers:
>>>   ovirt-node-ng-4.2.4-0.20180626.0:
>>> ovirt-node-ng-4.2.4-0.20180626.0+1
>>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>>> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>>   ovirt-node-ng-4.2.3-0.20180524.0:
>>> ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>>> ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> bootloader:
>>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   entries:
>>> ovirt-node-ng-4.2.3-0.20180524.0+1:
>>>   index: 0
>>>   title: ovirt-node-ng-4.2.3-0.20180524.0
>>>   kernel: /boot/ovirt-node-ng-4.2.3-0.20
>>> 180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>>   args: "ro crashkernel=auto 
>>> rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>> rd.lvm.lv=onn_ovn-monster/swap 
>>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587
>>> rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3
>>> -0.20180524.0+1"
>>>   initrd: /boot/ovirt-node-ng-4.2.3-0.20
>>> 180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>> ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>>   index: 1
>>>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>>   kernel: /boot/ovirt-node-ng-4.2.1.1-0.
>>> 20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovir
>>> t-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap
>>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
>>> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>>   initrd: /boot/ovirt-node-ng-4.2.1.1-0.
>>> 20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>>> [root@ovn-monster ~]# uptime
>>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>>>
>>> Am 29.06.2018 um 23:53 schrieb Matt Simonsen :
>>>
>>> Hello,
>>>
>>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
>>> platform and it doesn't appear the updates worked.
>>>
>>>
>>> [root@node6-g8-h4 ~]# yum update
>>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>>   : package_upload, product-id, search-disabled-repos,
>>> subscription-
>>>   : manager
>>> This system is not registered with an entitlement server. You can use
>>> subscription-manager to register.
>>> Loading mirror speeds from cached hostfile
>>>  * ovirt-4.2-epel: linux.mirrors.es.net
>>> Resolving Dependencies
>>> --> Running transaction check
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
>>> updated
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
>>> obsoleting
>>> ---> Package ovirt-node-ng-image-update-placeholder.noarch
>>> 0:4.2.3.1-1.el7 will be obsoleted
>>> --> Finished Dependency Resolution
>>>
>>> Dependencies Resolved
>>>
>>> 
>>> =
>>>  Package  Arch
>>> Version Repository   Size
>>> 
>>> =
>>> Installing:
>>>  ovirt-node-ng-image-update   noarch
>>> 4.2.4-1.el7 ovirt-4.2   647 M
>>>  replacing  ovirt-node-ng-image-update-placeholder.noarch
>>> 4.2.3.1-1.el7
>>>
>>> Transaction Summary
>>> 
>>> 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Sandro Bonazzola
2018-07-02 19:55 GMT+02:00 Yuval Turgeman :

> Not in front of my laptop so it's a little hard to read but does it say 8g
> free on the vg ?
>

Yes, it says 8G in the VFree column



>
> On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:
>
>> This error adds some clarity.
>>
>> That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>>
>> How do you suggest I proceed?
>>
>> Thank you for your help,
>>
>> Matt
>>
>>
>> [root@node6-g8-h4 ~]# lvs
>>
>>   LV   VG  Attr
>> LSize   Pool   Origin Data%  Meta%  Move Log
>> Cpy%Sync Convert
>>   home onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 4.79
>>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
>> <50.06g pool00 root
>>
>>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
>> <50.06g pool00
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
>> <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
>> 6.95
>>   pool00   onn_node1-g8-h4 twi-aotz--
>> <1.30t   76.63
>> 50.34
>>   root onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00
>>
>>   tmp  onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 5.04
>>   var  onn_node1-g8-h4 Vwi-aotz--
>> 15.00g pool00
>> 5.86
>>   var_crashonn_node1-g8-h4 Vwi---tz--
>> 10.00g pool00
>>
>>   var_local_images onn_node1-g8-h4 Vwi-aotz--
>> 1.10t pool00
>> 89.72
>>   var_log  onn_node1-g8-h4 Vwi-aotz--
>> 8.00g pool00
>> 6.84
>>   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>> 2.00g pool00
>> 6.16
>> [root@node6-g8-h4 ~]# vgs
>>   VG  #PV #LV #SN Attr   VSize  VFree
>>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>>
>>
>> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//
>> ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
>> 20180626.0.el7.squashfs.img'
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
>> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
>> True, 'stderr': -2}
>> 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
>> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr:
>> ovirt-node-ng-4.2.4-0.20180626.0
>> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
>> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt',
>> '--noheadings', '-o', 'SOURCE', '/'],) {}
>> 2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt',
>> '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned:
>> /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1
>> 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found
>> '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
>> 2018-06-29 14:19:31,204 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Not in front of my laptop so it's a little hard to read but does it say 8g
free on the vg ?

On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:

> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given I have
> several hundred GB of storage in the thin pool that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV   VG  Attr   LSize
> Pool   Origin Data%  Meta%  Move Log Cpy%Sync
> Convert
>   home onn_node1-g8-h4 Vwi-aotz--   1.00g
> pool00
> 4.79
>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k <50.06g
> pool00
> root
>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g
> pool00
> ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g
> pool00
>
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g
> pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
> 6.95
>   pool00   onn_node1-g8-h4 twi-aotz--
> <1.30t   76.63
> 50.34
>   root onn_node1-g8-h4 Vwi---tz-- <50.06g
> pool00
>
>   tmp  onn_node1-g8-h4 Vwi-aotz--   1.00g
> pool00
> 5.04
>   var  onn_node1-g8-h4 Vwi-aotz--  15.00g
> pool00
> 5.86
>   var_crashonn_node1-g8-h4 Vwi---tz--  10.00g
> pool00
>
>   var_local_images onn_node1-g8-h4 Vwi-aotz--   1.10t
> pool00
> 89.72
>   var_log  onn_node1-g8-h4 Vwi-aotz--   8.00g
> pool00
> 6.84
>   var_log_auditonn_node1-g8-h4 Vwi-aotz--   2.00g
> pool00
> 6.16
> [root@node6-g8-h4 ~]# vgs
>   VG  #PV #LV #SN Attr   VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> command='update', debug=True, experimental=False, format='liveimg',
> stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
> '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> u'/tmp/mnt.1OhaU'],) {}
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
> '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
> True, 'stderr': -2}
> 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr:
> ovirt-node-ng-4.2.4-0.20180626.0
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt',
> '--noheadings', '-o', 'SOURCE', '/'],) {}
> 2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt',
> '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned:
> /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1
> 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found
> '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
> 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs',
> '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name',
> 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Oliver Riesener
Hi, I attached my /tmp/imgbased.log

Cheers

Oliver

imgbased.log.gz
Description: GNU Zip compressed data
Am 02.07.2018 um 13:58 schrieb Yuval Turgeman :

Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log ?

Thanks,
Yuval.

On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola  wrote:

Yuval, can you please have a look?

2018-06-30 7:48 GMT+02:00 Oliver Riesener :

Yes, here is the same.

It seams the bootloader isn’t configured right ?

I did the Upgrade and reboot to 4.2.4 from UI and got:

[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95

Am 29.06.2018 um 23:53 schrieb Matt Simonsen :

Hello,

I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node platform and it doesn't appear the updates worked.

[root@node6-g8-h4 ~]# yum update
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
  : package_upload, product-id, search-disabled-repos, subscription-
  : manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * ovirt-4.2-epel: linux.mirrors.es.net
Resolving Dependencies
--> Running transaction check
---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be updated
---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be obsoleting
---> Package ovirt-node-ng-image-update-placeholder.noarch 0:4.2.3.1-1.el7 will be obsoleted
--> Finished Dependency Resolution

Dependencies Resolved

=
 Package  Arch Version Repository   Size
=
Installing:
 ovirt-node-ng-image-update   noarch 4.2.4-1.el7 ovirt-4.2   647 M
 replacing  ovirt-node-ng-image-update-placeholder.noarch 4.2.3.1-1.el7

Transaction Summary
=
Install  1 Package

Total download size: 647 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID fe590cb7: NOKEY
Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not installed
ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm | 647 MB  00:02:07
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
Importing GPG key 0xFE590CB7:
 Userid : "oVirt "
 Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
 Package    : ovirt-release42-4.2.3.1-1.el7.noarch (installed)
 From   : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ovirt-node-ng-image-update-4.2.4-1.el7.noarch 1/3
warning: %post(ovirt-node-ng-image-update-4.2.4-1.el7.noarch) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Matt Simonsen

This error adds some clarity.

That said, I'm a bit unsure how the space can be the issue given I have 
several hundred GB of storage in the thin pool that's unused...


How do you suggest I proceed?

Thank you for your help,

Matt



[root@node6-g8-h4 ~]# lvs

  LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
  ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
  ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
  pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                           76.63  50.34
  root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
  tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
  var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
  var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
  var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
  var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
  var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16

[root@node6-g8-h4 ~]# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
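
One way to read the numbers above (plain arithmetic on the values shown, nothing new measured): pool00 offers <1.30 TiB of physical space and is 76.63% full, i.e. roughly 1.0 TiB already written, most of it by var_local_images (1.10 TiB at 89.72% ≈ 0.99 TiB). The 8.00g VFree reported by vgs is space left in the VG outside the pool. A new layer is thin (about 50 GiB virtual, nothing written yet), but the threshold check looks at the pool's current usage, which would explain the refusal. The same picture in two commands:

lvs -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4/pool00
vgs -o vg_name,vg_size,vg_free onn_node1-g8-h4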


2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: 
Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', 
command='update', debug=True, experimental=False, format='liveimg', 
stream='Image')
2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image 
'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', 
'-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', 
'--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}

2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', 
'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', 
u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', 
'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', 
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}

2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at 
'/tmp/mnt.1OhaU/LiveOS/rootfs.img'
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', 
'-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', 
'--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}

2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', 
u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', 
u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': 
True, 'stderr': -2}

2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: 
ovirt-node-ng-4.2.4-0.20180626.0

2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: 
(['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {}
2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', 
'--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned: 
/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found 
'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs', 
'--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', 
u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) 
{'stderr': }
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling: (['lvs', 
'--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', 
u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) 
{'close_fds': True, 'stderr': 0x7f56b787eed0>}
2018-06-29 14:19:31,283 [DEBUG] 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Sandro Bonazzola
2018-07-02 13:58 GMT+02:00 Yuval Turgeman :

> Looks like the upgrade script failed - can you please attach
> /var/log/imgbased.log or /tmp/imgbased.log ?
>

Just re-tested locally in a VM 4.2.3.1 -> 4.2.4 and it worked perfectly.


# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
ovirt-node-ng-4.2.3.1-0.20180530.0+1
bootloader:
  default: ovirt-node-ng-4.2.4-0.20180626.0+1
  entries:
ovirt-node-ng-4.2.3.1-0.20180530.0+1:
  index: 1
  title: ovirt-node-ng-4.2.3.1-0.20180530.0
  kernel:
/boot/ovirt-node-ng-4.2.3.1-0.20180530.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
  args: "ro crashkernel=auto
rd.lvm.lv=onn_host/ovirt-node-ng-4.2.3.1-0.20180530.0+1
rd.lvm.lv=onn_host/swap rhgb quiet LANG=it_IT.UTF-8
img.bootid=ovirt-node-ng-4.2.3.1-0.20180530.0+1"
  initrd:
/boot/ovirt-node-ng-4.2.3.1-0.20180530.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
  root: /dev/onn_host/ovirt-node-ng-4.2.3.1-0.20180530.0+1
ovirt-node-ng-4.2.4-0.20180626.0+1:
  index: 0
  title: ovirt-node-ng-4.2.4-0.20180626.0
  kernel:
/boot/ovirt-node-ng-4.2.4-0.20180626.0+1/vmlinuz-3.10.0-862.3.3.el7.x86_64
  args: "ro crashkernel=auto rd.lvm.lv=onn_host/swap
rd.lvm.lv=onn_host/ovirt-node-ng-4.2.4-0.20180626.0+1
rhgb quiet LANG=it_IT.UTF-8 img.bootid=ovirt-node-ng-4.2.4-0.20180626.0+1"
  initrd:
/boot/ovirt-node-ng-4.2.4-0.20180626.0+1/initramfs-3.10.0-862.3.3.el7.x86_64.img
  root: /dev/onn_host/ovirt-node-ng-4.2.4-0.20180626.0+1
current_layer: ovirt-node-ng-4.2.4-0.20180626.0+1



>
> Thanks,
> Yuval.
>
> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
> wrote:
>
>> Yuval, can you please have a look?
>>
>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>>
>>> Yes, here is the same.
>>>
>>> It seams the bootloader isn’t configured right ?
>>>
>>> I did the Upgrade and reboot to 4.2.4 from UI and got:
>>>
>>> [root@ovn-monster ~]# nodectl info
>>> layers:
>>>   ovirt-node-ng-4.2.4-0.20180626.0:
>>> ovirt-node-ng-4.2.4-0.20180626.0+1
>>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>>> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>>   ovirt-node-ng-4.2.3-0.20180524.0:
>>> ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>>> ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> bootloader:
>>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   entries:
>>> ovirt-node-ng-4.2.3-0.20180524.0+1:
>>>   index: 0
>>>   title: ovirt-node-ng-4.2.3-0.20180524.0
>>>   kernel: /boot/ovirt-node-ng-4.2.3-0.20
>>> 180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>>   args: "ro crashkernel=auto 
>>> rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>> rd.lvm.lv=onn_ovn-monster/swap 
>>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587
>>> rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3
>>> -0.20180524.0+1"
>>>   initrd: /boot/ovirt-node-ng-4.2.3-0.20
>>> 180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>> ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>>   index: 1
>>>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>>   kernel: /boot/ovirt-node-ng-4.2.1.1-0.
>>> 20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovir
>>> t-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap
>>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
>>> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>>   initrd: /boot/ovirt-node-ng-4.2.1.1-0.
>>> 20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>>> [root@ovn-monster ~]# uptime
>>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>>>
>>> Am 29.06.2018 um 23:53 schrieb Matt Simonsen :
>>>
>>> Hello,
>>>
>>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
>>> platform and it doesn't appear the updates worked.
>>>
>>>
>>> [root@node6-g8-h4 ~]# yum update
>>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>>   : package_upload, product-id, search-disabled-repos,
>>> subscription-
>>>   : manager
>>> This system is not registered with an entitlement server. You can use
>>> subscription-manager to register.
>>> Loading mirror speeds from cached hostfile
>>>  * ovirt-4.2-epel: linux.mirrors.es.net
>>> Resolving Dependencies
>>> --> Running transaction check
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
>>> updated
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
>>> obsoleting
>>> ---> Package ovirt-node-ng-image-update-placeholder.noarch
>>> 0:4.2.3.1-1.el7 will be obsoleted
>>> --> Finished Dependency Resolution
>>>
>>> Dependencies Resolved
>>>
>>> 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Looks like the upgrade script failed - can you please attach
/var/log/imgbased.log or /tmp/imgbased.log ?

Thanks,
Yuval.

On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
wrote:

> Yuval, can you please have a look?
>
> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>
>> Yes, here is the same.
>>
>> It seams the bootloader isn’t configured right ?
>>
>> I did the Upgrade and reboot to 4.2.4 from UI and got:
>>
>> [root@ovn-monster ~]# nodectl info
>> layers:
>>   ovirt-node-ng-4.2.4-0.20180626.0:
>> ovirt-node-ng-4.2.4-0.20180626.0+1
>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>   ovirt-node-ng-4.2.3-0.20180524.0:
>> ovirt-node-ng-4.2.3-0.20180524.0+1
>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>> ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> bootloader:
>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>   entries:
>> ovirt-node-ng-4.2.3-0.20180524.0+1:
>>   index: 0
>>   title: ovirt-node-ng-4.2.3-0.20180524.0
>>   kernel: /boot/ovirt-node-ng-4.2.3-0.20
>> 180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>   args: "ro crashkernel=auto 
>> rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>> rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587
>> rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3
>> -0.20180524.0+1"
>>   initrd: /boot/ovirt-node-ng-4.2.3-0.20
>> 180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>> ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>   index: 1
>>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>   kernel: /boot/ovirt-node-ng-4.2.1.1-0.
>> 20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovir
>> t-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap
>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
>> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>   initrd: /boot/ovirt-node-ng-4.2.1.1-0.
>> 20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>> [root@ovn-monster ~]# uptime
>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>>
>> Am 29.06.2018 um 23:53 schrieb Matt Simonsen :
>>
>> Hello,
>>
>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
>> platform and it doesn't appear the updates worked.
>>
>>
>> [root@node6-g8-h4 ~]# yum update
>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>   : package_upload, product-id, search-disabled-repos,
>> subscription-
>>   : manager
>> This system is not registered with an entitlement server. You can use
>> subscription-manager to register.
>> Loading mirror speeds from cached hostfile
>>  * ovirt-4.2-epel: linux.mirrors.es.net
>> Resolving Dependencies
>> --> Running transaction check
>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
>> updated
>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
>> obsoleting
>> ---> Package ovirt-node-ng-image-update-placeholder.noarch
>> 0:4.2.3.1-1.el7 will be obsoleted
>> --> Finished Dependency Resolution
>>
>> Dependencies Resolved
>>
>> 
>> =
>>  Package  Arch
>> Version Repository   Size
>> 
>> =
>> Installing:
>>  ovirt-node-ng-image-update   noarch
>> 4.2.4-1.el7 ovirt-4.2   647 M
>>  replacing  ovirt-node-ng-image-update-placeholder.noarch
>> 4.2.3.1-1.el7
>>
>> Transaction Summary
>> 
>> =
>> Install  1 Package
>>
>> Total download size: 647 M
>> Is this ok [y/d/N]: y
>> Downloading packages:
>> warning: /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-ima
>> ge-update-4.2.4-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID
>> fe590cb7: NOKEY
>> Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not
>> installed
>> ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm | 647 MB  00:02:07
>> Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
>> Importing GPG key 0xFE590CB7:
>>  Userid : "oVirt "
>>  Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
>>  Package: ovirt-release42-4.2.3.1-1.el7.noarch (installed)
>>  From   : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
>> Is this ok [y/N]: y
>> Running transaction check
>> Running transaction test
>> Transaction test succeeded
>> Running transaction
>>   Installing : 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Sandro Bonazzola
Yuval, can you please have a look?

2018-06-30 7:48 GMT+02:00 Oliver Riesener :

> Yes, here is the same.
>
> It seams the bootloader isn’t configured right ?
>
> I did the Upgrade and reboot to 4.2.4 from UI and got:
>
> [root@ovn-monster ~]# nodectl info
> layers:
>   ovirt-node-ng-4.2.4-0.20180626.0:
> ovirt-node-ng-4.2.4-0.20180626.0+1
>   ovirt-node-ng-4.2.3.1-0.20180530.0:
> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>   ovirt-node-ng-4.2.3-0.20180524.0:
> ovirt-node-ng-4.2.3-0.20180524.0+1
>   ovirt-node-ng-4.2.1.1-0.20180223.0:
> ovirt-node-ng-4.2.1.1-0.20180223.0+1
> bootloader:
>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>   entries:
> ovirt-node-ng-4.2.3-0.20180524.0+1:
>   index: 0
>   title: ovirt-node-ng-4.2.3-0.20180524.0
>   kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-
> 862.3.2.el7.x86_64
>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/
> ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap
> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
>   initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-
> 862.3.2.el7.x86_64.img
>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
> ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>   index: 1
>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>   kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-
> 693.17.1.el7.x86_64
>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/
> ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap
> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>   initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-
> 693.17.1.el7.x86_64.img
>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
> [root@ovn-monster ~]# uptime
>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>
> Am 29.06.2018 um 23:53 schrieb Matt Simonsen :
>
> Hello,
>
> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
> platform and it doesn't appear the updates worked.
>
>
> [root@node6-g8-h4 ~]# yum update
> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>   : package_upload, product-id, search-disabled-repos,
> subscription-
>   : manager
> This system is not registered with an entitlement server. You can use
> subscription-manager to register.
> Loading mirror speeds from cached hostfile
>  * ovirt-4.2-epel: linux.mirrors.es.net
> Resolving Dependencies
> --> Running transaction check
> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
> updated
> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
> obsoleting
> ---> Package ovirt-node-ng-image-update-placeholder.noarch
> 0:4.2.3.1-1.el7 will be obsoleted
> --> Finished Dependency Resolution
>
> Dependencies Resolved
>
> 
> =
>  Package  Arch
> Version Repository   Size
> 
> =
> Installing:
>  ovirt-node-ng-image-update   noarch
> 4.2.4-1.el7 ovirt-4.2   647 M
>  replacing  ovirt-node-ng-image-update-placeholder.noarch
> 4.2.3.1-1.el7
>
> Transaction Summary
> 
> =
> Install  1 Package
>
> Total download size: 647 M
> Is this ok [y/d/N]: y
> Downloading packages:
> warning: /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-
> image-update-4.2.4-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID
> fe590cb7: NOKEY
> Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not
> installed
> ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm | 647 MB  00:02:07
> Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
> Importing GPG key 0xFE590CB7:
>  Userid : "oVirt "
>  Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
>  Package: ovirt-release42-4.2.3.1-1.el7.noarch (installed)
>  From   : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
> Is this ok [y/N]: y
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
>   Installing : ovirt-node-ng-image-update-4.2.4-1.el7.noarch 1/3
> warning: %post(ovirt-node-ng-image-update-4.2.4-1.el7.noarch) scriptlet
> failed, exit status 1
> Non-fatal POSTIN scriptlet failure in rpm package
> ovirt-node-ng-image-update-4.2.4-1.el7.noarch
>   Erasing: ovirt-node-ng-image-update-placeholder-4.2.3.1-1.el7.noarch

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-06-29 Thread Oliver Riesener
Yes, I am seeing the same here.

It seems the bootloader isn't configured correctly?

I did the upgrade to 4.2.4 from the UI, rebooted, and got:

[root@ovn-monster ~]# nodectl info
layers: 
  ovirt-node-ng-4.2.4-0.20180626.0: 
ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0: 
ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0: 
ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0: 
ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader: 
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries: 
ovirt-node-ng-4.2.3-0.20180524.0+1: 
  index: 0
  title: ovirt-node-ng-4.2.3-0.20180524.0
  kernel: 
/boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
  args: "ro crashkernel=auto 
rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 
rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 
rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
  initrd: 
/boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
  root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
ovirt-node-ng-4.2.1.1-0.20180223.0+1: 
  index: 1
  title: ovirt-node-ng-4.2.1.1-0.20180223.0
  kernel: 
/boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
  args: "ro crashkernel=auto 
rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 
rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 
rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
  initrd: 
/boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
  root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
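
For reference, a minimal sketch for cross-checking what nodectl reports against the LVM layers and the actual grub configuration could look like this (assuming the standard EL7 lvm2/grub2/grubby tools and the VG/layer names shown above; the log path below is only where imgbased usually writes, so treat it as an assumption):

  # each +1 layer LV should have a read-only base LV without the +1 suffix
  lvs -o lv_name,origin,pool_lv,lv_size onn_ovn-monster

  # what grub will actually boot by default
  grub2-editenv list
  grubby --default-kernel

  # check whether a boot entry for the new layer was written at all
  grubby --info=ALL | grep -A2 'ovirt-node-ng-4.2.4'

  # if present, this log should show why the new layer was not registered
  less /var/log/imgbased.log

In the output above the 4.2.4 layer exists under "layers" but is missing from the "bootloader" entries, so whatever ran during the update apparently never created the grub entry. Switching the default alone would not help; reinstalling the image-update rpm after finding the root cause in the log is probably the cleaner fix.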

> On 29.06.2018 at 23:53, Matt Simonsen wrote:
> 
> Hello,
> 
> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node 
> platform and it doesn't appear the updates worked.
> 
> 
> [root@node6-g8-h4 ~]# yum update
> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>   : package_upload, product-id, search-disabled-repos, 
> subscription-
>   : manager
> This system is not registered with an entitlement server. You can use 
> subscription-manager to register.
> Loading mirror speeds from cached hostfile
>  * ovirt-4.2-epel: linux.mirrors.es.net
> Resolving Dependencies
> --> Running transaction check
> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be updated
> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be 
> obsoleting
> ---> Package ovirt-node-ng-image-update-placeholder.noarch 0:4.2.3.1-1.el7 
> will be obsoleted
> --> Finished Dependency Resolution
> 
> Dependencies Resolved
> 
> ================================================================================
>  Package                      Arch     Version        Repository         Size
> ================================================================================
> Installing:
>  ovirt-node-ng-image-update   noarch   4.2.4-1.el7    ovirt-4.2         647 M
>      replacing  ovirt-node-ng-image-update-placeholder.noarch 4.2.3.1-1.el7
>
> Transaction Summary
> ================================================================================
> Install  1 Package
> 
> Total download size: 647 M
> Is this ok [y/d/N]: y
> Downloading packages:
> warning: 
> /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm:
>  Header V4 RSA/SHA1 Signature, key ID fe590cb7: NOKEY
> Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not 
> installed
> ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm | 647 MB  00:02:07
> Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
> Importing GPG key 0xFE590CB7:
>  Userid : "oVirt "
>  Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
>  Package: ovirt-release42-4.2.3.1-1.el7.noarch (installed)
>  From   : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
> Is this ok [y/N]: y
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
>   Installing : ovirt-node-ng-image-update-4.2.4-1.el7.noarch 1/3
> warning: %post(ovirt-node-ng-image-update-4.2.4-1.el7.noarch) scriptlet 
> failed, exit status 1
> Non-fatal POSTIN scriptlet failure in rpm package 
> ovirt-node-ng-image-update-4.2.4-1.el7.noarch
>   Erasing: ovirt-node-ng-image-update-placeholder-4.2.3.1-1.el7.noarch 2/3
>   Cleanup: ovirt-node-ng-image-update-4.2.3.1-1.el7.noarch 3/3
> warning: file 
>