Hi Oliver,
Sorry we couldn't get this to upgrade, but removing the base layers kinda
killed us - however, we already have some ideas on how to improve imgbased
to make it more friendly :)
Thanks for the update!
Yuval.
On Thu, Jul 5, 2018 at 3:52 PM, Oliver Riesener <
Hi Yuval,
as you can see in my last attachment, after the LV metadata restore I was
unable to modify LVs in pool00.
The thin pool has queued transactions: got 23, expected 16 or so.
I rebooted and tried repairing from a CentOS 7 USB stick, but couldn't
access or remove the LVs because they hold a read lock and then a write
lock.
Many thanks to Yuval.
After moving the discussion to #ovirt, I tried "fstrim -a" and this
allowed the upgrade to complete successfully.
Matt
On 07/03/2018 12:19 PM, Yuval Turgeman wrote:
Hi Matt,
I would try to run `fstrim -a` (man fstrim) and see if it frees
anything from the thinpool.
OK Good, this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1
still exists without its base - try this:
1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
2. nodectl info
On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:
> I did it,
Hi Oliver,
I would try the following, but please notice it is *very* dangerous, so a
backup is probably a good idea (man vgcfgrestore)...
1. vgcfgrestore --list onn_ovn-monster
2. search for a .vg file that was created before deleting those 2 lvs (
ovirt-node-ng-4.2.3-0.20180524.0 and
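As an aside, step 2 can be sketched mechanically. This is a hypothetical helper (the function name, paths, and dates below are made up for illustration, not from the thread): pick the newest metadata backup older than a cutoff. On a real host the candidates come from `vgcfgrestore --list` / /etc/lvm/archive, and actually restoring remains dangerous, so back up first.

```shell
# Sketch: newest .vg archive created before a cutoff time.
# Placeholder files stand in for /etc/lvm/archive/onn_ovn-monster_*.vg.
newest_before() {  # usage: newest_before <cutoff-epoch> <file>...
  cutoff=$1; shift
  best=""; best_t=0
  for f in "$@"; do
    t=$(stat -c %Y "$f" 2>/dev/null) || continue
    # keep the file with the latest mtime that is still before the cutoff
    if [ "$t" -lt "$cutoff" ] && [ "$t" -gt "$best_t" ]; then
      best=$f; best_t=$t
    fi
  done
  echo "$best"
}

dir=$(mktemp -d)
touch -d '2018-07-01 10:00' "$dir/onn_ovn-monster_00010.vg"
touch -d '2018-07-02 10:00' "$dir/onn_ovn-monster_00011.vg"
touch -d '2018-07-03 10:00' "$dir/onn_ovn-monster_00012.vg"
cutoff=$(date -d '2018-07-02 12:00' +%s)
newest_before "$cutoff" "$dir"/*.vg   # prints the _00011.vg path
# then (dangerous, backup first): vgcfgrestore -f <that file> onn_ovn-monster
```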
Hi Oliver,
The KeyError happens because there are no bases for the layers. For each
LV that ends with a +1, there should be a base read-only LV without +1. So
for 3 ovirt-node-ng images, you're supposed to have 6 layers. This is the
reason nodectl info fails, and the upgrade will fail also.
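The base/+1 pairing described above can be checked mechanically. A small sketch (the helper name is made up for illustration; the sample LV names are from this thread): flag +1 layers whose base LV is missing from `lvs --noheadings -o lv_name,origin` output.

```shell
# Sketch: list +1 layers that have no matching base LV.
find_orphan_layers() {
  awk '
    { lvs[$1] = 1 }                          # every LV name seen
    $1 ~ /\+1$/ { layers[$1] = 1 }           # the writable +1 layers
    END {
      for (l in layers) {
        base = substr(l, 1, length(l) - 2)   # strip the trailing "+1"
        if (!(base in lvs)) print l          # base missing -> orphan
      }
    }'
}

printf '%s\n' \
  'ovirt-node-ng-4.2.3.1-0.20180530.0' \
  'ovirt-node-ng-4.2.3.1-0.20180530.0+1 ovirt-node-ng-4.2.3.1-0.20180530.0' \
  'ovirt-node-ng-4.2.4-0.20180626.0+1' |
  find_orphan_layers
# prints: ovirt-node-ng-4.2.4-0.20180626.0+1
```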
Hi Matt,
I would try to run `fstrim -a` (man fstrim) and see if it frees anything
from the thinpool. If you do decide to run this, please send the output
for lvs again.
Also, are you on #ovirt ?
Thanks,
Yuval.
On Tue, Jul 3, 2018 at 9:00 PM, Matt Simonsen wrote:
> Thank you again for the
Thank you again for the assistance with this issue.
Below is the result of the command below.
In the future I am considering using different Logical RAID Volumes to
get different devices (sda, sdb, etc) for the oVirt Node image & storage
filesystem to simplify. However I'd like to understand
Oliver, can you share the output from lvs ?
On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:
> Hi Yuval,
>
> * reinstallation failed, because LV already exists.
> ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri-a-tz-k
> <252,38g pool00
Not sure this is the problem, autoextend should be enabled for the
thinpool, `lvs -o +profile` should show imgbased-pool (defined at
/etc/lvm/profile/imgbased-pool.profile)
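For reference, such an LVM profile typically looks like the fragment below; the exact values here are illustrative, so check /etc/lvm/profile/imgbased-pool.profile on the host rather than copying these numbers.

```
activation {
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 20
}
```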
On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David wrote:
> On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen wrote:
> >
> >
On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given I have
> several hundred GB of storage in the thin pool that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
>
Hi Yuval,

* reinstallation failed, because LV already exists.

  ovirt-node-ng-4.2.4-0.20180626.0   onn_ovn-monster Vri-a-tz-k <252,38g pool00                                  0,85
  ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
On 07/02/2018 12:55 PM, Yuval Turgeman wrote:
Are you mounted with discard ? perhaps fstrim ?
I believe that I have all the default options, and I have one extra
partition for images.
#
# /etc/fstab
# Created by anaconda on Sat Oct 31 18:04:29 2015
#
# Accessible filesystems, by
Hi Yuval,
yes, you are right, there was an unused and deactivated var_crash LV.
* I activated and mounted it to /var/crash via /etc/fstab.
* /var/crash was empty, and the LV already has an ext4 fs.
  var_crash onn_ovn-monster Vwi-aotz-- 10,00g pool00
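The fstab entry described here would look something like the line below. The device path is assumed from the VG/LV names in this thread, and the options are a plain-default guess, not taken from the actual host:

```
/dev/onn_ovn-monster/var_crash  /var/crash  ext4  defaults  1 2
```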
Btw, removing /var/crash was directed to Oliver - you have different
problems
On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen wrote:
> Yes, it shows 8g on the VG
>
> I removed the LV for /var/crash, then installed again, and it is still
> failing on the step:
>
>
> 2018-07-02 12:21:10,015
Are you mounted with discard ? perhaps fstrim ?
On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen wrote:
> Yes, it shows 8g on the VG
>
> I removed the LV for /var/crash, then installed again, and it is still
> failing on the step:
>
>
> 2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling:
Yes, it shows 8g on the VG
I removed the LV for /var/crash, then installed again, and it is still
failing on the step:
2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate',
'--thin', '--virtualsize', u'53750005760B', '--name',
'ovirt-node-ng-4.2.4-0.20180626.0',
From your log:
AssertionError: Path is already a volume: /var/crash
Basically, it means that you already have an LV for /var/crash but it's not
mounted for some reason, so either mount it (if the data good) or remove it
and then reinstall the image-update rpm. Before that, check that you dont
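The decision being described (the /var/crash LV exists but nothing is mounted there) can be sketched as below. The helper name and sample mounts table are made up for illustration; on a real host you would read /proc/mounts and then either mount the LV or `lvremove` it before reinstalling the image-update rpm.

```shell
# Sketch: is a given path an active mount point in a mounts table?
mounted() {  # usage: mounted <mountpoint> <mounts-file>
  awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' "$2"
}

mounts=$(mktemp)
cat > "$mounts" <<'EOF'
/dev/mapper/onn-var /var ext4 rw 0 0
/dev/mapper/onn-var_log /var/log ext4 rw 0 0
EOF

if mounted /var/crash "$mounts"; then
  echo "keep: /var/crash is mounted"
else
  echo "not mounted: mount it (if the data is good) or lvremove it"
fi
# prints: not mounted: mount it (if the data is good) or lvremove it
```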
2018-07-02 19:55 GMT+02:00 Yuval Turgeman :
> Not in front of my laptop so it's a little hard to read but does it say 8g
> free on the vg ?
>
Yes, it says 8G in Vfree column
>
> On Mon, Jul 2, 2018, 20:00 Matt Simonsen wrote:
>
>> This error adds some clarity.
>>
>> That said, I'm a bit
Not in front of my laptop so it's a little hard to read but does it say 8g
free on the vg ?
On Mon, Jul 2, 2018, 20:00 Matt Simonsen wrote:
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given I have
> several hundred GB of storage in the thin
Hi, I attached my /tmp/imgbased.log
Cheers
Oliver
imgbased.log.gz
Description: GNU Zip compressed data
On 02.07.2018 at 13:58, Yuval Turgeman wrote:
Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log ?
Thanks,
Yuval.
On Mon, Jul
This error adds some clarity.
That said, I'm a bit unsure how the space can be the issue given I have
several hundred GB of storage in the thin pool that's unused...
How do you suggest I proceed?
Thank you for your help,
Matt
[root@node6-g8-h4 ~]# lvs
LV
2018-07-02 13:58 GMT+02:00 Yuval Turgeman :
> Looks like the upgrade script failed - can you please attach
> /var/log/imgbased.log or /tmp/imgbased.log ?
>
Just re-tested locally in a VM 4.2.3.1 -> 4.2.4 and it worked perfectly.
# nodectl info
layers:
ovirt-node-ng-4.2.4-0.20180626.0:
Looks like the upgrade script failed - can you please attach
/var/log/imgbased.log or /tmp/imgbased.log ?
Thanks,
Yuval.
On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola
wrote:
> Yuval, can you please have a look?
>
> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>
>> Yes, here is the same.
>>
Yuval, can you please have a look?
2018-06-30 7:48 GMT+02:00 Oliver Riesener :
> Yes, here is the same.
>
> It seems the bootloader isn't configured right?
>
> I did the Upgrade and reboot to 4.2.4 from UI and got:
>
> [root@ovn-monster ~]# nodectl info
> layers:
>
Yes, here is the same.
It seems the bootloader isn't configured right?
I did the Upgrade and reboot to 4.2.4 from UI and got:
[root@ovn-monster ~]# nodectl info
layers:
ovirt-node-ng-4.2.4-0.20180626.0:
ovirt-node-ng-4.2.4-0.20180626.0+1
ovirt-node-ng-4.2.3.1-0.20180530.0: