[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yedidyah Bar David
On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen  wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given I have 
> several hundred GB of storage in the thin pool that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
>   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
>   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                           76.63  50.34

I think your thinpool meta volume is close to full and needs to be enlarged.
This quite likely happened because you extended the thinpool without
extending the meta vol.

Check also 'lvs -a'.

This might be enough, but check the names first:

lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
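
For reference, a minimal sketch of checking the metadata usage first and then growing it - the pool00_tmeta name is the LVM default for this pool, so adjust names to whatever 'lvs -a' actually shows:

# show data and metadata usage of the pool and its hidden sub-LVs
lvs -a -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4
# grow the metadata LV directly ...
lvextend -L +200m onn_node1-g8-h4/pool00_tmeta
# ... or equivalently via the pool itself
lvextend --poolmetadatasize +200m onn_node1-g8-h4/pool00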

Best regards,

>   root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>   tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
>   var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
>   var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>   var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
>   var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
>   var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16
> [root@node6-g8-h4 ~]# vgs
>   VG  #PV #LV #SN Attr   VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: 
> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>  command='update', debug=True, experimental=False, format='liveimg', 
> stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image 
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', 
> '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', 
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', 
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>  u'/tmp/mnt.1OhaU'],) {}
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', 
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>  u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at 
> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', 
> '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', 
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', 
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', 
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': True, 
> 'stderr': -2}
> 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: 
> ovirt-node-ng-4.2.4-0.20180626.0
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt', 
> '--noheadings', '-o', 'SOURCE', '/'],) {}
> 2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', 
> '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,203 [DE

[ovirt-users] Re: Install hosted-engine - Task Get local VM IP failed

2018-07-02 Thread charuraks
I just aborted the experiment on the virtual machine and am trying on a physical machine instead.
There is some issue on the VM: it gets stuck at the "Get local VM IP" step, possibly
because the virtual machine doesn't support the bridge driver, or for some other reason.
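
If anyone wants to keep debugging the nested (VM) case, here is a rough checklist; it assumes an Intel host and the stock libvirt setup, and is not verified against this exact deployment:

# nested virtualization must be enabled on the outer hypervisor
cat /sys/module/kvm_intel/parameters/nested
# the local HostedEngine VM is attached to a libvirt network/bridge during deployment
virsh net-list --all
# confirm the expected bridge device actually exists on the host
ip link show type bridge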
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XEP3NAVYMJ2LRYVJS3GU3UVMA5ZANXSG/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Oliver Riesener
Hi Yuval,

* Reinstallation failed, because the LV already exists.

  ovirt-node-ng-4.2.4-0.20180626.0     onn_ovn-monster Vri-a-tz-k <252,38g pool00                                  0,85
  ovirt-node-ng-4.2.4-0.20180626.0+1   onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85

  See attachment imgbased.reinstall.log

* I removed them and re-reinstalled without luck. I got a KeyError: see attachment imgbased.rereinstall.log

Also a new problem with nodectl info:

[root@ovn-monster tmp]# nodectl info
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
    return cmdmap.command(args)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
    return self.commands[command](**kwargs)
  File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
    Info(self.imgbased, self.machine).write()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in __init__
    self._fetch_information()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
    self._get_layout()
  File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in _get_layout
    layout = LayoutParser(self.app.imgbase.layout()).parse()
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155, in layout
    return self.naming.layout()
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109, in layout
    tree = self.tree(lvs)
  File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224, in tree
    bases[img.base.nvr].layers.append(img)
KeyError:

imgbased.log.reinstall.gz
Description: GNU Zip compressed data


imgbased.log.rereinstall.gz
Description: GNU Zip compressed data
On 02.07.2018 at 22:22, Oliver Riesener wrote:

Hi Yuval,

yes, you are right, there was an unused and deactivated var_crash LV.

* I activated and mounted it to /var/crash via /etc/fstab.
* /var/crash was empty, and the LV already has an ext4 fs.

  var_crash                            onn_ovn-monster Vwi-aotz--   10,00g pool00                                    2,86

* Now I will try to upgrade again.
  * yum reinstall ovirt-node-ng-image-update.noarch

BTW, no more imgbased.log files found.

On 02.07.2018 at 20:57, Yuval Turgeman wrote:

From your log:

AssertionError: Path is already a volume: /var/crash

Basically, it means that you already have an LV for /var/crash but it's not mounted for some reason, so either mount it (if the data is good) or remove it and then reinstall the image-update rpm. Before that, check that you don't have any other LVs in that same state - or you can post the output of lvs... btw, do you have any more imgbased.log files lying around?

You can find more details about this here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade

On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener wrote:

Hi, I attached my /tmp/imgbased.log

Cheers
Oliver

On 02.07.2018 at 13:58, Yuval Turgeman wrote:

Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log ?

Thanks,
Yuval.

On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola wrote:

Yuval, can you please have a look?

2018-06-30 7:48 GMT+02:00 Oliver Riesener:

Yes, here is the same.

It seems the bootloader isn't configured right?

I did the Upgrade and reboot to 4.2.4 from UI and got:

[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.201802

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Matt Simonsen

On 07/02/2018 12:55 PM, Yuval Turgeman wrote:

Are you mounted with discard ? perhaps fstrim ?





I believe that I have all the default options, and I have one extra 
partition for images.



#
# /etc/fstab
# Created by anaconda on Sat Oct 31 18:04:29 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/onn_node1-g8-h4/ovirt-node-ng-4.2.3.1-0.20180530.0+1 / ext4 defaults,discard 1 1

UUID=84ca8776-61d6-4b19-9104-99730932b45a /boot ext4    defaults    1 2
/dev/mapper/onn_node1--g8--h4-home /home ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-tmp /tmp ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var /var ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var_local_images /var/local/images ext4 defaults 1 2

/dev/mapper/onn_node1--g8--h4-var_log /var/log ext4 defaults,discard 1 2
/dev/mapper/onn_node1--g8--h4-var_log_audit /var/log/audit ext4 defaults,discard 1 2
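
One thing worth noting for the fstrim suggestion: /var/local/images (the 1.10t LV at ~90% data usage) is mounted with plain "defaults", i.e. without discard, so blocks freed there are not returned to the thin pool automatically. A possible way to check and reclaim space, using standard util-linux/LVM commands (adjust names as needed):

# trim all mounted filesystems that support it and report how much was discarded
fstrim -av
# re-check pool usage afterwards
lvs -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4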



At this point I don't have a /var/crash mounted (or even an LV for it). I
assume I should re-create it.



I noticed that on another server with the same problem, the var_crash LV
isn't available. Could this be part of the problem?


  --- Logical volume ---
  LV Path    /dev/onn/var_crash
  LV Name    var_crash
  VG Name    onn
  LV UUID    X1TPMZ-XeZP-DGYv-woZW-3kvk-vWZu-XQcFhL
  LV Write Access    read/write
  LV Creation host, time node1-g7-h1.srihosting.com, 2018-04-05 
07:03:35 -0700

  LV Pool name   pool00
  LV Status  NOT available
  LV Size    10.00 GiB
  Current LE 2560
  Segments   1
  Allocation inherit
  Read ahead sectors auto
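
If re-using that volume is the goal, a minimal sketch (assuming the names from the lvdisplay output above and that the filesystem on it is intact):

# activate the thin LV and mount it where the upgrade expects it
lvchange -ay onn/var_crash
mkdir -p /var/crash
mount /dev/onn/var_crash /var/crash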



Thanks
Matt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WAV667HP5HU6IXGJTLZQ6YSMHSHTHF6M/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Oliver Riesener
Hi Yuval,

yes, you are right, there was an unused and deactivated var_crash LV.

* I activated and mounted it to /var/crash via /etc/fstab.
* /var/crash was empty, and the LV already has an ext4 fs.

  var_crash                            onn_ovn-monster Vwi-aotz--   10,00g pool00                                    2,86

* Now I will try to upgrade again.
  * yum reinstall ovirt-node-ng-image-update.noarch
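
For reference, a hypothetical sketch of that activation and fstab entry, mirroring the mount style used elsewhere in this thread (names taken from the lvs line above):

lvchange -ay onn_ovn-monster/var_crash
# illustrative /etc/fstab line:
#   /dev/onn_ovn-monster/var_crash  /var/crash  ext4  defaults,discard  1 2
mount -a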

BTW, no more imgbased.log files found.

> On 02.07.2018 at 20:57, Yuval Turgeman wrote:
> 
> From your log: 
> 
> AssertionError: Path is already a volume: /var/crash
> 
> Basically, it means that you already have an LV for /var/crash but it's not 
> mounted for some reason, so either mount it (if the data good) or remove it 
> and then reinstall the image-update rpm.  Before that, check that you dont 
> have any other LVs in that same state - or you can post the output for lvs... 
> btw, do you have any more imgbased.log files laying around ?
> 
> You can find more details about this here:
> 
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
>  
> 
> 
> On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener  > wrote:
> Hi, 
> 
> i attached my /tmp/imgbased.log
> 
> Cheers
> 
> Oliver
> 
> 
> 
>> On 02.07.2018 at 13:58, Yuval Turgeman wrote:
>> 
>> Looks like the upgrade script failed - can you please attach 
>> /var/log/imgbased.log or /tmp/imgbased.log ?
>> 
>> Thanks,
>> Yuval.
>> 
>> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola > > wrote:
>> Yuval, can you please have a look?
>> 
>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener > >:
>> Yes, here is the same.
>> 
>> It seems the bootloader isn't configured right?
>>  
>> I did the Upgrade and reboot to 4.2.4 from UI and got:
>> 
>> [root@ovn-monster ~]# nodectl info
>> layers: 
>>   ovirt-node-ng-4.2.4-0.20180626.0: 
>> ovirt-node-ng-4.2.4-0.20180626.0+1
>>   ovirt-node-ng-4.2.3.1-0.20180530.0: 
>> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>   ovirt-node-ng-4.2.3-0.20180524.0: 
>> ovirt-node-ng-4.2.3-0.20180524.0+1
>>   ovirt-node-ng-4.2.1.1-0.20180223.0: 
>> ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> bootloader: 
>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>   entries: 
>> ovirt-node-ng-4.2.3-0.20180524.0+1: 
>>   index: 0
>>   title: ovirt-node-ng-4.2.3-0.20180524.0
>>   kernel: 
>> /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>   args: "ro crashkernel=auto rd.lvm.lv 
>> =onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 
>> rd.lvm.lv =onn_ovn-monster/swap 
>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 
>> img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
>>   initrd: 
>> /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>> ovirt-node-ng-4.2.1.1-0.20180223.0+1: 
>>   index: 1
>>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>   kernel: 
>> /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>   args: "ro crashkernel=auto rd.lvm.lv 
>> =onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 
>> rd.lvm.lv =onn_ovn-monster/swap 
>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 
>> img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>   initrd: 
>> /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>> [root@ovn-monster ~]# uptime
>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>> 
>>> On 29.06.2018 at 23:53, Matt Simonsen wrote:
>>> 
>>> Hello,
>>> 
>>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node 
>>> platform and it doesn't appear the updates worked.
>>> 
>>> 
>>> [root@node6-g8-h4 ~]# yum update
>>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>>   : package_upload, product-id, search-disabled-repos, 
>>> subscription-
>>>   : manager
>>> This system is not registered with an entitlement server. You can use 
>>> subscription-manager to register.
>>> Loading mirror speeds from cached hostfile
>>>  * ovirt-4.2-epel: linux.mirrors.es.net 
>>> Resolving Dependencies
>>> --> Running transaction check
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be 
>>> updated
>>> ---> Package ovirt-node-ng-image-updat

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Btw, removing /var/crash was directed to Oliver - you have different
problems


On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen  wrote:

> Yes, it shows 8g on the VG
>
> I removed the LV for /var/crash, then installed again, and it is still
> failing on the step:
>
>
> 2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate',
> '--thin', '--virtualsize', u'53750005760B', '--name',
> 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],)
> {'close_fds': True, 'stderr': -2}
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception!   Cannot create
> new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached
> threshold.
>
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.ZYOjC'],) {}
>
>
> Thanks
>
> Matt
>
>
>
>
>
> On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
>
> Not in front of my laptop so it's a little hard to read but does it say 8g
> free on the vg ?
>
> On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:
>
>> This error adds some clarity.
>>
>> That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>>
>> How do you suggest I proceed?
>>
>> Thank you for your help,
>>
>> Matt
>>
>>
>> [root@node6-g8-h4 ~]# lvs
>>
>>   LV   VG  Attr
>> LSize   Pool   Origin Data%  Meta%  Move Log
>> Cpy%Sync Convert
>>   home onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 4.79
>>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
>> <50.06g pool00 root
>>
>>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
>> <50.06g pool00
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
>> <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
>> 6.95
>>   pool00   onn_node1-g8-h4 twi-aotz--
>> <1.30t   76.63
>> 50.34
>>   root onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00
>>
>>   tmp  onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 5.04
>>   var  onn_node1-g8-h4 Vwi-aotz--
>> 15.00g pool00
>> 5.86
>>   var_crashonn_node1-g8-h4 Vwi---tz--
>> 10.00g pool00
>>
>>   var_local_images onn_node1-g8-h4 Vwi-aotz--
>> 1.10t pool00
>> 89.72
>>   var_log  onn_node1-g8-h4 Vwi-aotz--
>> 8.00g pool00
>> 6.84
>>   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>> 2.00g pool00
>> 6.16
>> [root@node6-g8-h4 ~]# vgs
>>   VG  #PV #LV #SN Attr   VSize  VFree
>>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>>
>>
>> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//
>> ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
>> 20180626.0.el7.squashfs.img'
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
>> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
>> True, 'stderr': -2}
>> 2018-06-29 14:19

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Are you mounted with discard? Perhaps try fstrim?

On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen  wrote:

> Yes, it shows 8g on the VG
>
> I removed the LV for /var/crash, then installed again, and it is still
> failing on the step:
>
>
> 2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate',
> '--thin', '--virtualsize', u'53750005760B', '--name',
> 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],)
> {'close_fds': True, 'stderr': -2}
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception!   Cannot create
> new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached
> threshold.
>
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.ZYOjC'],) {}
>
>
> Thanks
>
> Matt
>
>
>
>
>
> On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
>
> Not in front of my laptop so it's a little hard to read but does it say 8g
> free on the vg ?
>
> On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:
>
>> This error adds some clarity.
>>
>> That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>>
>> How do you suggest I proceed?
>>
>> Thank you for your help,
>>
>> Matt
>>
>>
>> [root@node6-g8-h4 ~]# lvs
>>
>>   LV   VG  Attr
>> LSize   Pool   Origin Data%  Meta%  Move Log
>> Cpy%Sync Convert
>>   home onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 4.79
>>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
>> <50.06g pool00 root
>>
>>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
>> <50.06g pool00
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
>> <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
>> 6.95
>>   pool00   onn_node1-g8-h4 twi-aotz--
>> <1.30t   76.63
>> 50.34
>>   root onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00
>>
>>   tmp  onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 5.04
>>   var  onn_node1-g8-h4 Vwi-aotz--
>> 15.00g pool00
>> 5.86
>>   var_crashonn_node1-g8-h4 Vwi---tz--
>> 10.00g pool00
>>
>>   var_local_images onn_node1-g8-h4 Vwi-aotz--
>> 1.10t pool00
>> 89.72
>>   var_log  onn_node1-g8-h4 Vwi-aotz--
>> 8.00g pool00
>> 6.84
>>   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>> 2.00g pool00
>> 6.16
>> [root@node6-g8-h4 ~]# vgs
>>   VG  #PV #LV #SN Attr   VSize  VFree
>>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>>
>>
>> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//
>> ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
>> 20180626.0.el7.squashfs.img'
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
>> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
>> True, 'stderr': -2}
>> 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Re

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Matt Simonsen

Yes, it shows 8g on the VG

I removed the LV for /var/crash, then installed again, and it is still 
failing on the step:



2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate', 
'--thin', '--virtualsize', u'53750005760B', '--name', 
'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) 
{'close_fds': True, 'stderr': -2}
2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception!   Cannot create 
new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached 
threshold.


2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount', 
'-l', u'/tmp/mnt.ZYOjC'],) {}
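
For what it's worth, that lvcreate message is normally tied to LVM's thin pool autoextend threshold rather than to VG free space, so a hedged way to see what is tripping it (the field and option names are standard LVM; the exact threshold shipped on oVirt Node may differ):

# data and metadata usage of the pool, including hidden sub-LVs
lvs -a -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4
# the threshold that lvcreate checks against
grep -nE 'thin_pool_autoextend_(threshold|percent)' /etc/lvm/lvm.conf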



Thanks

Matt





On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
Not in front of my laptop so it's a little hard to read but does it 
say 8g free on the vg ?


On Mon, Jul 2, 2018, 20:00 Matt Simonsen > wrote:


This error adds some clarity.

That said, I'm a bit unsure how the space can be the issue given I
have several hundred GB of storage in the thin pool that's unused...

How do you suggest I proceed?

Thank you for your help,

Matt



[root@node6-g8-h4 ~]# lvs

  LV   VG Attr   LSize   Pool
Origin Data%  Meta%  Move Log Cpy%Sync
Convert
  home onn_node1-g8-h4
Vwi-aotz--   1.00g pool00 4.79
  ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
<50.06g pool00 root
  ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
<50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
  ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
<50.06g pool00
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
<50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0 6.95
  pool00   onn_node1-g8-h4 twi-aotz--
<1.30t   76.63 50.34
  root onn_node1-g8-h4 Vwi---tz--
<50.06g pool00
  tmp  onn_node1-g8-h4
Vwi-aotz--   1.00g pool00 5.04
  var  onn_node1-g8-h4 Vwi-aotz-- 
15.00g pool00 5.86
  var_crash    onn_node1-g8-h4 Vwi---tz-- 
10.00g pool00
  var_local_images onn_node1-g8-h4
Vwi-aotz--   1.10t pool00 89.72
  var_log  onn_node1-g8-h4
Vwi-aotz--   8.00g pool00 6.84
  var_log_audit    onn_node1-g8-h4
Vwi-aotz--   2.00g pool00 6.16
[root@node6-g8-h4 ~]# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g


2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:

Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
command='update', debug=True, experimental=False,
format='liveimg', stream='Image')
2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp',
'-d', '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary:
(['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
'/tmp/mnt.1OhaU/LiveOS/rootfs.img'
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp',
'-d', '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary:
(['mount', u'/tmp/mnt.1OhaU/LiveOS/rootfs.img',
u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],)
{'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr:
ovirt-node-ng-4.2.4-0

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
From your log:

AssertionError: Path is already a volume: /var/crash

Basically, it means that you already have an LV for /var/crash but it's not
mounted for some reason, so either mount it (if the data is good) or remove it
and then reinstall the image-update rpm. Before that, check that you don't
have any other LVs in that same state - or you can post the output of
lvs... btw, do you have any more imgbased.log files lying around?
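
A minimal sketch of both options, assuming the VG name from Oliver's setup (onn_ovn-monster) - adjust to whatever lvs reports:

# inactive thin LVs show '-' instead of 'a' in the 5th lv_attr character
lvs -o lv_name,lv_attr,lv_size,pool_lv onn_ovn-monster
# option 1: keep the data - activate and mount it
lvchange -ay onn_ovn-monster/var_crash
mount /dev/onn_ovn-monster/var_crash /var/crash
# option 2: discard it - remove the LV, then retry the update
lvremove onn_ovn-monster/var_crash
yum reinstall ovirt-node-ng-image-update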

You can find more details about this here:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade

On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Hi,
>
> i attached my /tmp/imgbased.log
>
> Cheers
>
> Oliver
>
>
>
> On 02.07.2018 at 13:58, Yuval Turgeman wrote:
>
> Looks like the upgrade script failed - can you please attach
> /var/log/imgbased.log or /tmp/imgbased.log ?
>
> Thanks,
> Yuval.
>
> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
> wrote:
>
>> Yuval, can you please have a look?
>>
>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>>
>>> Yes, here is the same.
>>>
>>> It seems the bootloader isn't configured right?
>>>
>>> I did the Upgrade and reboot to 4.2.4 from UI and got:
>>>
>>> [root@ovn-monster ~]# nodectl info
>>> layers:
>>>   ovirt-node-ng-4.2.4-0.20180626.0:
>>> ovirt-node-ng-4.2.4-0.20180626.0+1
>>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>>> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>>   ovirt-node-ng-4.2.3-0.20180524.0:
>>> ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>>> ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> bootloader:
>>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   entries:
>>> ovirt-node-ng-4.2.3-0.20180524.0+1:
>>>   index: 0
>>>   title: ovirt-node-ng-4.2.3-0.20180524.0
>>>   kernel: /boot/ovirt-node-ng-4.2.3-0.20
>>> 180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>>   args: "ro crashkernel=auto 
>>> rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>> rd.lvm.lv=onn_ovn-monster/swap 
>>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587
>>> rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3
>>> -0.20180524.0+1"
>>>   initrd: /boot/ovirt-node-ng-4.2.3-0.20
>>> 180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>> ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>>   index: 1
>>>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>>   kernel: /boot/ovirt-node-ng-4.2.1.1-0.
>>> 20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovir
>>> t-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap
>>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
>>> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>>   initrd: /boot/ovirt-node-ng-4.2.1.1-0.
>>> 20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>>> [root@ovn-monster ~]# uptime
>>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>>>
>>> On 29.06.2018 at 23:53, Matt Simonsen wrote:
>>>
>>> Hello,
>>>
>>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
>>> platform and it doesn't appear the updates worked.
>>>
>>>
>>> [root@node6-g8-h4 ~]# yum update
>>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>>   : package_upload, product-id, search-disabled-repos,
>>> subscription-
>>>   : manager
>>> This system is not registered with an entitlement server. You can use
>>> subscription-manager to register.
>>> Loading mirror speeds from cached hostfile
>>>  * ovirt-4.2-epel: linux.mirrors.es.net
>>> Resolving Dependencies
>>> --> Running transaction check
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
>>> updated
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
>>> obsoleting
>>> ---> Package ovirt-node-ng-image-update-placeholder.noarch
>>> 0:4.2.3.1-1.el7 will be obsoleted
>>> --> Finished Dependency Resolution
>>>
>>> Dependencies Resolved
>>>
>>> 
>>> =
>>>  Package  Arch
>>> Version Repository   Size
>>> 
>>> =
>>> Installing:
>>>  ovirt-node-ng-image-update   noarch
>>> 4.2.4-1.el7 ovirt-4.2   647 M
>>>  replacing  ovirt-node-ng-image-update-placeholder.noarch
>>> 4.2.3.1-1.el7
>>>
>>> Transaction Summary
>>> 
>>> =

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Sandro Bonazzola
2018-07-02 19:55 GMT+02:00 Yuval Turgeman :

> Not in front of my laptop so it's a little hard to read but does it say 8g
> free on the vg ?
>

Yes, it says 8G in the VFree column



>
> On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:
>
>> This error adds some clarity.
>>
>> That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>>
>> How do you suggest I proceed?
>>
>> Thank you for your help,
>>
>> Matt
>>
>>
>> [root@node6-g8-h4 ~]# lvs
>>
>>   LV   VG  Attr
>> LSize   Pool   Origin Data%  Meta%  Move Log
>> Cpy%Sync Convert
>>   home onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 4.79
>>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
>> <50.06g pool00 root
>>
>>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
>> <50.06g pool00
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
>> <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
>> 6.95
>>   pool00   onn_node1-g8-h4 twi-aotz--
>> <1.30t   76.63
>> 50.34
>>   root onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00
>>
>>   tmp  onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 5.04
>>   var  onn_node1-g8-h4 Vwi-aotz--
>> 15.00g pool00
>> 5.86
>>   var_crashonn_node1-g8-h4 Vwi---tz--
>> 10.00g pool00
>>
>>   var_local_images onn_node1-g8-h4 Vwi-aotz--
>> 1.10t pool00
>> 89.72
>>   var_log  onn_node1-g8-h4 Vwi-aotz--
>> 8.00g pool00
>> 6.84
>>   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>> 2.00g pool00
>> 6.16
>> [root@node6-g8-h4 ~]# vgs
>>   VG  #PV #LV #SN Attr   VSize  VFree
>>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>>
>>
>> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//
>> ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
>> 20180626.0.el7.squashfs.img'
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
>> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
>> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
>> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
>> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
>> True, 'stderr': -2}
>> 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
>> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr:
>> ovirt-node-ng-4.2.4-0.20180626.0
>> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
>> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt',
>> '--noheadings', '-o', 'SOURCE', '/'],) {}
>> 2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt',
>> '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned:
>> /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1
>> 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found
>> '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
>> 2018-06-29 14:19:31,204 [DE

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Not in front of my laptop so it's a little hard to read but does it say 8g
free on the vg ?

On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:

> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given I have
> several hundred GB of storage in the thin pool that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV   VG  Attr   LSize
> Pool   Origin Data%  Meta%  Move Log Cpy%Sync
> Convert
>   home onn_node1-g8-h4 Vwi-aotz--   1.00g
> pool00
> 4.79
>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k <50.06g
> pool00
> root
>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g
> pool00
> ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g
> pool00
>
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g
> pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
> 6.95
>   pool00   onn_node1-g8-h4 twi-aotz--
> <1.30t   76.63
> 50.34
>   root onn_node1-g8-h4 Vwi---tz-- <50.06g
> pool00
>
>   tmp  onn_node1-g8-h4 Vwi-aotz--   1.00g
> pool00
> 5.04
>   var  onn_node1-g8-h4 Vwi-aotz--  15.00g
> pool00
> 5.86
>   var_crashonn_node1-g8-h4 Vwi---tz--  10.00g
> pool00
>
>   var_local_images onn_node1-g8-h4 Vwi-aotz--   1.10t
> pool00
> 89.72
>   var_log  onn_node1-g8-h4 Vwi-aotz--   8.00g
> pool00
> 6.84
>   var_log_auditonn_node1-g8-h4 Vwi-aotz--   2.00g
> pool00
> 6.16
> [root@node6-g8-h4 ~]# vgs
>   VG  #PV #LV #SN Attr   VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> command='update', debug=True, experimental=False, format='liveimg',
> stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
> '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> u'/tmp/mnt.1OhaU'],) {}
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
> '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
> 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
> True, 'stderr': -2}
> 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr:
> ovirt-node-ng-4.2.4-0.20180626.0
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
> 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: (['findmnt',
> '--noheadings', '-o', 'SOURCE', '/'],) {}
> 2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt',
> '--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned:
> /dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1
> 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found
> '/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
> 2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs',
> '--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name',
> u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],)

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Oliver Riesener
Hi, I attached my /tmp/imgbased.log

Cheers
Oliver

imgbased.log.gz
Description: GNU Zip compressed data
On 02.07.2018 at 13:58, Yuval Turgeman wrote:

Looks like the upgrade script failed - can you please attach /var/log/imgbased.log or /tmp/imgbased.log ?

Thanks,
Yuval.

On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola wrote:

Yuval, can you please have a look?

2018-06-30 7:48 GMT+02:00 Oliver Riesener:

Yes, here is the same.

It seems the bootloader isn't configured right?

I did the Upgrade and reboot to 4.2.4 from UI and got:

[root@ovn-monster ~]# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
    ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
    ovirt-node-ng-4.2.3.1-0.20180530.0+1
  ovirt-node-ng-4.2.3-0.20180524.0:
    ovirt-node-ng-4.2.3-0.20180524.0+1
  ovirt-node-ng-4.2.1.1-0.20180223.0:
    ovirt-node-ng-4.2.1.1-0.20180223.0+1
bootloader:
  default: ovirt-node-ng-4.2.3-0.20180524.0+1
  entries:
    ovirt-node-ng-4.2.3-0.20180524.0+1:
      index: 0
      title: ovirt-node-ng-4.2.3-0.20180524.0
      kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
      initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
    ovirt-node-ng-4.2.1.1-0.20180223.0+1:
      index: 1
      title: ovirt-node-ng-4.2.1.1-0.20180223.0
      kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
      initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
      root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
[root@ovn-monster ~]# uptime
 07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95

On 29.06.2018 at 23:53, Matt Simonsen wrote:

Hello,

I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node platform and it doesn't appear the updates worked.

[root@node6-g8-h4 ~]# yum update
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
              : package_upload, product-id, search-disabled-repos, subscription-
              : manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * ovirt-4.2-epel: linux.mirrors.es.net
Resolving Dependencies
--> Running transaction check
---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be updated
---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be obsoleting
---> Package ovirt-node-ng-image-update-placeholder.noarch 0:4.2.3.1-1.el7 will be obsoleted
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================================
 Package                                        Arch    Version        Repository   Size
=================================================================================
Installing:
 ovirt-node-ng-image-update                     noarch  4.2.4-1.el7    ovirt-4.2   647 M
     replacing  ovirt-node-ng-image-update-placeholder.noarch 4.2.3.1-1.el7

Transaction Summary
=================================================================================
Install  1 Package

Total download size: 647 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID fe590cb7: NOKEY
Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not installed
ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm | 647 MB  00:02:07
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
Importing GPG key 0xFE590CB7:
 Userid     : "oVirt "
 Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
 Package    : ovirt-release42-4.2.3.1-1.el7.noarch (installed)
 From       : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ovirt-node-ng-image-update-4.2.4-1.el7.noarch 1/3
warning: %post(ovirt-node-ng-image-update-4.2.4-1.el7.noarch) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package ovirt-node-ng-im
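
Since the transcript above ends with a non-fatal %post scriptlet failure, a way to dig further with standard rpm/yum commands (nothing oVirt-specific assumed) is to look at what the scriptlet runs and then retry it while watching the imgbased log:

# show the %post scriptlet that failed
rpm -q --scripts ovirt-node-ng-image-update
# retry the layer installation and check the log mentioned earlier in the thread
yum reinstall ovirt-node-ng-image-update
less /var/log/imgbased.log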

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Matt Simonsen

This error adds some clarity.

That said, I'm a bit unsure how the space can be the issue given I have 
several hundred GB of storage in the thin pool that's unused...


How do you suggest I proceed?

Thank you for your help,

Matt



[root@node6-g8-h4 ~]# lvs

  LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
  ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
  ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
  ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
  pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                           76.63  50.34
  root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
  tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     5.04
  var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                     5.86
  var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
  var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                    89.72
  var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                     6.84
  var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                     6.16

[root@node6-g8-h4 ~]# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g


2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments: 
Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', 
command='update', debug=True, experimental=False, format='liveimg', 
stream='Image')
2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image 
'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp', 
'-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d', 
'--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}

2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount', 
'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', 
u'/tmp/mnt.1OhaU'],) {}
2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount', 
'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', 
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}

2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at 
'/tmp/mnt.1OhaU/LiveOS/rootfs.img'
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp', 
'-d', '--tmpdir', 'mnt.X'],) {}
2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d', 
'--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}

2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount', 
u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount', 
u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds': 
True, 'stderr': -2}

2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr: 
ovirt-node-ng-4.2.4-0.20180626.0

2018-06-29 14:19:31,189 [DEBUG] (MainThread) Fetching image for '/'
2018-06-29 14:19:31,189 [DEBUG] (MainThread) Calling binary: 
(['findmnt', '--noheadings', '-o', 'SOURCE', '/'],) {}
2018-06-29 14:19:31,190 [DEBUG] (MainThread) Calling: (['findmnt', 
'--noheadings', '-o', 'SOURCE', '/'],) {'close_fds': True, 'stderr': -2}
2018-06-29 14:19:31,203 [DEBUG] (MainThread) Returned: 
/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Found 
'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling binary: (['lvs', 
'--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', 
u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) 
{'stderr': }
2018-06-29 14:19:31,204 [DEBUG] (MainThread) Calling: (['lvs', 
'--noheadings', '--ignoreskippedcluster', '-ovg_name,lv_name', 
u'/dev/mapper/onn_node1--g8--h4-ovirt--node--ng--4.2.3.1--0.20180530.0+1'],) 
{'close_fds': True, 'stderr': 0x7f56b787eed0>}
2018-06-29 14:19:31,283 [DEBUG] (MainThre

[ovirt-users] Re: Feature: Live migration for High Performance VMs

2018-07-02 Thread Gianluca Cecchi
On Mon, Jul 2, 2018 at 4:15 PM, Sharon Gratch  wrote:

> Hi everyone,
>
> I would like to share our plan for supporting live migration for High
> Performance VMs (and in general to all VM types with pinning settings).
>

Evviva!
Greatly awaited feature for high performance VMs.

[snip]

>
> More Details on that can be found on the feature page
> 
> .
>
> Any feedback on this feature will be greatly appreciated.
>
> Thanks,
> Sharon
>

I'm going to read and understand when avalable how to test.
Thanks to you

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5I2KTYMPRUCWO3636TMMM2SL3B4SIL4O/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Sandro Bonazzola
2018-07-02 13:58 GMT+02:00 Yuval Turgeman :

> Looks like the upgrade script failed - can you please attach
> /var/log/imgbased.log or /tmp/imgbased.log ?
>

Just re-tested locally in a VM 4.2.3.1 -> 4.2.4 and it worked perfectly.


# nodectl info
layers:
  ovirt-node-ng-4.2.4-0.20180626.0:
ovirt-node-ng-4.2.4-0.20180626.0+1
  ovirt-node-ng-4.2.3.1-0.20180530.0:
ovirt-node-ng-4.2.3.1-0.20180530.0+1
bootloader:
  default: ovirt-node-ng-4.2.4-0.20180626.0+1
  entries:
ovirt-node-ng-4.2.3.1-0.20180530.0+1:
  index: 1
  title: ovirt-node-ng-4.2.3.1-0.20180530.0
  kernel:
/boot/ovirt-node-ng-4.2.3.1-0.20180530.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
  args: "ro crashkernel=auto
rd.lvm.lv=onn_host/ovirt-node-ng-4.2.3.1-0.20180530.0+1
rd.lvm.lv=onn_host/swap rhgb quiet LANG=it_IT.UTF-8
img.bootid=ovirt-node-ng-4.2.3.1-0.20180530.0+1"
  initrd:
/boot/ovirt-node-ng-4.2.3.1-0.20180530.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
  root: /dev/onn_host/ovirt-node-ng-4.2.3.1-0.20180530.0+1
ovirt-node-ng-4.2.4-0.20180626.0+1:
  index: 0
  title: ovirt-node-ng-4.2.4-0.20180626.0
  kernel:
/boot/ovirt-node-ng-4.2.4-0.20180626.0+1/vmlinuz-3.10.0-862.3.3.el7.x86_64
  args: "ro crashkernel=auto rd.lvm.lv=onn_host/swap
rd.lvm.lv=onn_host/ovirt-node-ng-4.2.4-0.20180626.0+1
rhgb quiet LANG=it_IT.UTF-8 img.bootid=ovirt-node-ng-4.2.4-0.20180626.0+1"
  initrd:
/boot/ovirt-node-ng-4.2.4-0.20180626.0+1/initramfs-3.10.0-862.3.3.el7.x86_64.img
  root: /dev/onn_host/ovirt-node-ng-4.2.4-0.20180626.0+1
current_layer: ovirt-node-ng-4.2.4-0.20180626.0+1
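
For comparison with the failing node above, where the bootloader default still points at the 4.2.3 layer, a generic EL7 way to double-check which entry grub will actually boot (plain grub2/grubby commands, not necessarily how imgbased manages its entries):

grub2-editenv list          # shows saved_entry, i.e. the default boot entry
grubby --default-kernel     # kernel that the default entry points to
nodectl info                # compare against the imgbased view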



>
> Thanks,
> Yuval.
>
> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
> wrote:
>
>> Yuval, can you please have a look?
>>
>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>>
>>> Yes, here is the same.
>>>
>>> It seems the bootloader isn't configured right?
>>>
>>> I did the Upgrade and reboot to 4.2.4 from UI and got:
>>>
>>> [root@ovn-monster ~]# nodectl info
>>> layers:
>>>   ovirt-node-ng-4.2.4-0.20180626.0:
>>> ovirt-node-ng-4.2.4-0.20180626.0+1
>>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>>> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>>   ovirt-node-ng-4.2.3-0.20180524.0:
>>> ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>>> ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> bootloader:
>>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   entries:
>>> ovirt-node-ng-4.2.3-0.20180524.0+1:
>>>   index: 0
>>>   title: ovirt-node-ng-4.2.3-0.20180524.0
>>>   kernel: /boot/ovirt-node-ng-4.2.3-0.20
>>> 180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>>   args: "ro crashkernel=auto 
>>> rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>> rd.lvm.lv=onn_ovn-monster/swap 
>>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587
>>> rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3
>>> -0.20180524.0+1"
>>>   initrd: /boot/ovirt-node-ng-4.2.3-0.20
>>> 180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>> ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>>   index: 1
>>>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>>   kernel: /boot/ovirt-node-ng-4.2.1.1-0.
>>> 20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovir
>>> t-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap
>>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
>>> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>>   initrd: /boot/ovirt-node-ng-4.2.1.1-0.
>>> 20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>>> [root@ovn-monster ~]# uptime
>>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>>>
>>> Am 29.06.2018 um 23:53 schrieb Matt Simonsen :
>>>
>>> Hello,
>>>
>>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
>>> platform and it doesn't appear the updates worked.
>>>
>>>
>>> [root@node6-g8-h4 ~]# yum update
>>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>>   : package_upload, product-id, search-disabled-repos,
>>> subscription-
>>>   : manager
>>> This system is not registered with an entitlement server. You can use
>>> subscription-manager to register.
>>> Loading mirror speeds from cached hostfile
>>>  * ovirt-4.2-epel: linux.mirrors.es.net
>>> Resolving Dependencies
>>> --> Running transaction check
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
>>> updated
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
>>> obsoleting
>>> ---> Package ovirt-node-ng-image-update-placeholder.noarch
>>> 0:4.2.3.1-1.el7 will be obsoleted
>>> --> Finished Dependency Resolution
>>>
>>> Dependencies Resolved
>>>
>>> =

[ovirt-users] Feature: Live migration for High Performance VMs

2018-07-02 Thread Sharon Gratch
Hi everyone,

I would like to share our plan for supporting live migration for High
Performance VMs (and in general to all VM types with pinning settings).


Implementing this will be done in 2 phases:
Phase 1:
1. Only manual live migration will be supported for high performance VMs
type.
2. The user will need to manually choose the destination host to migrate
his VM to (can't base on scheduler manager to find the most suitable host).
3. Source and destination hosts should have the same hardware and supports
the exact same configuration.

Phase 2:
1. Both automatic and manual live migration modes will be supported for HP
VMs.
2. Destination host can be automatically selected by the engine (scheduler
manager).
3. Source and destination hosts should fit but not be necessarily identical.

More Details on that can be found on the feature page

.
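
As an illustration, once this is available the manual phase 1 flow could be
scripted with the Python SDK (ovirtsdk4) roughly as in the sketch below. The
engine URL, credentials, VM name and destination host name are placeholders,
and verifying that the destination matches the VM's pinning configuration
remains the administrator's job:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Engine URL and credentials are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

try:
    vms_service = connection.system_service().vms_service()
    # 'hp-vm-01' is a placeholder name for a high performance VM.
    vm = vms_service.list(search='name=hp-vm-01')[0]

    # Live-migrate to an explicitly chosen destination host (phase 1:
    # the user picks the host, the scheduler is not consulted).
    vms_service.vm_service(vm.id).migrate(
        host=types.Host(name='destination-host'),
    )
finally:
    connection.close()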

Any feedback on this feature will be greatly appreciated.

Thanks,
Sharon
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GSYET6AYQOT2IRESKHVGFZE6FS2ZTMZ7/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Looks like the upgrade script failed - can you please attach
/var/log/imgbased.log or /tmp/imgbased.log ?

Thanks,
Yuval.

On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
wrote:

> Yuval, can you please have a look?
>
> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>
>> Yes, here is the same.
>>
>> It seems the bootloader isn't configured right?
>>
>> I did the upgrade and reboot to 4.2.4 from the UI and got:
>>
>> [root@ovn-monster ~]# nodectl info
>> layers:
>>   ovirt-node-ng-4.2.4-0.20180626.0:
>> ovirt-node-ng-4.2.4-0.20180626.0+1
>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>   ovirt-node-ng-4.2.3-0.20180524.0:
>> ovirt-node-ng-4.2.3-0.20180524.0+1
>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>> ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> bootloader:
>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>   entries:
>> ovirt-node-ng-4.2.3-0.20180524.0+1:
>>   index: 0
>>   title: ovirt-node-ng-4.2.3-0.20180524.0
>>   kernel: /boot/ovirt-node-ng-4.2.3-0.20
>> 180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>   args: "ro crashkernel=auto 
>> rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>> rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587
>> rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3
>> -0.20180524.0+1"
>>   initrd: /boot/ovirt-node-ng-4.2.3-0.20
>> 180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>> ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>   index: 1
>>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>   kernel: /boot/ovirt-node-ng-4.2.1.1-0.
>> 20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovir
>> t-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap
>> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
>> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>   initrd: /boot/ovirt-node-ng-4.2.1.1-0.
>> 20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>> [root@ovn-monster ~]# uptime
>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>>
>> Am 29.06.2018 um 23:53 schrieb Matt Simonsen :
>>
>> Hello,
>>
>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
>> platform and it doesn't appear the updates worked.
>>
>>
>> [root@node6-g8-h4 ~]# yum update
>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>   : package_upload, product-id, search-disabled-repos,
>> subscription-
>>   : manager
>> This system is not registered with an entitlement server. You can use
>> subscription-manager to register.
>> Loading mirror speeds from cached hostfile
>>  * ovirt-4.2-epel: linux.mirrors.es.net
>> Resolving Dependencies
>> --> Running transaction check
>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
>> updated
>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
>> obsoleting
>> ---> Package ovirt-node-ng-image-update-placeholder.noarch
>> 0:4.2.3.1-1.el7 will be obsoleted
>> --> Finished Dependency Resolution
>>
>> Dependencies Resolved
>>
>> 
>> =
>>  Package  Arch
>> Version Repository   Size
>> 
>> =
>> Installing:
>>  ovirt-node-ng-image-update   noarch
>> 4.2.4-1.el7 ovirt-4.2   647 M
>>  replacing  ovirt-node-ng-image-update-placeholder.noarch
>> 4.2.3.1-1.el7
>>
>> Transaction Summary
>> 
>> =
>> Install  1 Package
>>
>> Total download size: 647 M
>> Is this ok [y/d/N]: y
>> Downloading packages:
>> warning: /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-ima
>> ge-update-4.2.4-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID
>> fe590cb7: NOKEY
>> Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not
>> installed
>> ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm | 647 MB  00:02:07
>> Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
>> Importing GPG key 0xFE590CB7:
>>  Userid : "oVirt "
>>  Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
>>  Package: ovirt-release42-4.2.3.1-1.el7.noarch (installed)
>>  From   : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
>> Is this ok [y/N]: y
>> Running transaction check
>> Running transaction test
>> Transaction test succeeded
>> Running transaction
>>   Installing : ovirt-node-ng-image-u

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Sandro Bonazzola
Yuval, can you please have a look?

2018-06-30 7:48 GMT+02:00 Oliver Riesener :

> Yes, here is the same.
>
> It seems the bootloader isn't configured right?
>
> I did the upgrade and reboot to 4.2.4 from the UI and got:
>
> [root@ovn-monster ~]# nodectl info
> layers:
>   ovirt-node-ng-4.2.4-0.20180626.0:
> ovirt-node-ng-4.2.4-0.20180626.0+1
>   ovirt-node-ng-4.2.3.1-0.20180530.0:
> ovirt-node-ng-4.2.3.1-0.20180530.0+1
>   ovirt-node-ng-4.2.3-0.20180524.0:
> ovirt-node-ng-4.2.3-0.20180524.0+1
>   ovirt-node-ng-4.2.1.1-0.20180223.0:
> ovirt-node-ng-4.2.1.1-0.20180223.0+1
> bootloader:
>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>   entries:
> ovirt-node-ng-4.2.3-0.20180524.0+1:
>   index: 0
>   title: ovirt-node-ng-4.2.3-0.20180524.0
>   kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-
> 862.3.2.el7.x86_64
>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/
> ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap
> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
>   initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-
> 862.3.2.el7.x86_64.img
>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
> ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>   index: 1
>   title: ovirt-node-ng-4.2.1.1-0.20180223.0
>   kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-
> 693.17.1.el7.x86_64
>   args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/
> ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap
> rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet
> LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>   initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-
> 693.17.1.el7.x86_64.img
>   root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
> [root@ovn-monster ~]# uptime
>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>
> Am 29.06.2018 um 23:53 schrieb Matt Simonsen :
>
> Hello,
>
> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
> platform and it doesn't appear the updates worked.
>
>
> [root@node6-g8-h4 ~]# yum update
> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>   : package_upload, product-id, search-disabled-repos,
> subscription-
>   : manager
> This system is not registered with an entitlement server. You can use
> subscription-manager to register.
> Loading mirror speeds from cached hostfile
>  * ovirt-4.2-epel: linux.mirrors.es.net
> Resolving Dependencies
> --> Running transaction check
> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
> updated
> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
> obsoleting
> ---> Package ovirt-node-ng-image-update-placeholder.noarch
> 0:4.2.3.1-1.el7 will be obsoleted
> --> Finished Dependency Resolution
>
> Dependencies Resolved
>
> 
> =
>  Package  Arch
> Version Repository   Size
> 
> =
> Installing:
>  ovirt-node-ng-image-update   noarch
> 4.2.4-1.el7 ovirt-4.2   647 M
>  replacing  ovirt-node-ng-image-update-placeholder.noarch
> 4.2.3.1-1.el7
>
> Transaction Summary
> 
> =
> Install  1 Package
>
> Total download size: 647 M
> Is this ok [y/d/N]: y
> Downloading packages:
> warning: /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-
> image-update-4.2.4-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID
> fe590cb7: NOKEY
> Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not
> installed
> ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm | 647 MB  00:02:07
> Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
> Importing GPG key 0xFE590CB7:
>  Userid : "oVirt "
>  Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
>  Package: ovirt-release42-4.2.3.1-1.el7.noarch (installed)
>  From   : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
> Is this ok [y/N]: y
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
>   Installing : ovirt-node-ng-image-update-4.2.4-1.el7.noarch 1/3
> warning: %post(ovirt-node-ng-image-update-4.2.4-1.el7.noarch) scriptlet
> failed, exit status 1
> Non-fatal POSTIN scriptlet failure in rpm package
> ovirt-node-ng-image-update-4.2.4-1.el7.noarch
>   Erasing: ovirt-node-ng-image-update-placeholder-4.2.3.1-1.el7.noarch
>

[ovirt-users] Re: PollVDSCommand Error: java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils

2018-07-02 Thread Deekshith
Hi team

Can I have a best-practice self-hosted engine setup guide? I am cleaning up
the OS (fresh installation). Please also give me some backup tips to protect
against future corruption issues.

 

Server model : Lenovo 3650 m5 

 

 Regards 

Deekshith

 

 

 

From: Roy Golan [mailto:rgo...@redhat.com] 
Sent: 26 June 2018 12:42
To: Deekshith
Cc: Greg Sheremeta; devel; users
Subject: Re: [ovirt-users] PollVDSCommand Error: 
java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils

 

 

On Tue, 26 Jun 2018 at 09:47 Deekshith  wrote:

What about the data if I clean up the OS?

 

 

See here 
https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration/

 

If you want to avoid the backup/restore you can try reinstalling some of the 
packages:

   yum reinstall ovirt-engine-wildfly

 

and run engine-setup.

 

 

 Regards 

Deekshith

 

From: Roy Golan [mailto:rgo...@redhat.com] 
Sent: 26 June 2018 12:14
To: Deekshith
Cc: Greg Sheremeta; devel; users


Subject: Re: [ovirt-users] PollVDSCommand Error: 
java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils

 

 

On Tue, 26 Jun 2018 at 07:23 Deekshith  wrote:

Hi team

It was installed 1 year back. 10 days ago we were no longer able to reach the 
console and webadmin of the server; it would work for only 10 minutes after a 
manual restart of the server. I tried refreshing the engine (engine-cleanup), 
but had the same issue, so I upgraded my CentOS 7.2 to 7.4 and then got the 
webadmin and console of the server back. Now my first problem is resolved, but 
the host is not coming up.

 

I recommend you do an engine-backup, clean up the OS (yum update, ensure the 
proper oVirt repo for your version), then do engine-setup and restore.

 

 

 

Regards 

Deekshith

From: Greg Sheremeta [mailto:gsher...@redhat.com] 
Sent: 25 June 2018 06:49
To: deeksh...@binaryindia.com
Cc: Roy Golan; devel; users; allent...@gmail.com


Subject: Re: [ovirt-users] PollVDSCommand Error: 
java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils

 

Is this a fresh install or an upgrade to 4.1?

 

On Mon, Jun 25, 2018 at 3:41 AM Deekshith  wrote:

Dear team

Engine version is 4.1 and I am able to reach the console and webadmin from the 
browser. All the VM data is available on the host, but the host is not coming up.
I upgraded from the net (yum update) and all the necessary packages were 
downloaded. Let me know if any log files are needed. 

 

Please help me to resolve the issue .

 

 Regards 

Deekshith

 

 

From: Greg Sheremeta [mailto:gsher...@redhat.com] 
Sent: 22 June 2018 03:22
To: deeksh...@binaryindia.com
Cc: Roy Golan; devel; users; allen_j...@mrpl.co.in
Subject: Re: [ovirt-users] PollVDSCommand Error: 
java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils

 

Is your engine functioning at all? I'm not sure if you're saying your entire 
engine doesn't work, or just host deploy doesn't work.

What version of ovirt engine is this?

How did you install -- fresh installation, upgrade? From source or rpm?

 

Best wishes,

Greg

 

 

On Wed, Jun 20, 2018 at 12:35 AM Deekshith  wrote:

Dear team

Please find the attached server and host logs. The oVirt host is unresponsive 
(down), and I am unable to install the node packages. 

 

Regards 

Deekshith

 

From: Roy Golan [mailto:rgo...@redhat.com] 
Sent: 19 June 2018 06:57
To: Greg Sheremeta
Cc: devel; users; deeksh...@binaryindia.com; allen_j...@mrpl.co.in
Subject: Re: [ovirt-users] PollVDSCommand Error: 
java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils

 

 

On Tue, 19 Jun 2018 at 16:04 Greg Sheremeta  wrote:

Sending to devel list.

 

Anyone ever seen this before? It sounds like a bad installation if Java classes 
are missing / classloader issues are present.

 

2018-06-18 11:23:11,287+05 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(VdsDeploy) [d6a2578b-c58f-43be-b6ad-30a3a0a57a74] EVENT_ID: 
VDS_INSTALL_IN_PROGRESS(509), Correlation ID: 
d6a2578b-c58f-43be-b6ad-30a3a0a57a74, Call Stack: null, Custom ID: null, Custom 
Event ID: -1, Message: Installing Host Node. Retrieving installation logs to: 
'/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20180618112311-mrpl.kvm2-d6a2578b-c58f-43be-b6ad-30a3a0a57a74.log'.

2018-06-18 11:23:12,131+05 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(VdsDeploy) [d6a2578b-c58f-43be-b6ad-30a3a0a57a74] EVENT_ID: 
VDS_INSTALL_IN_PROGRESS(509), Correlation ID: 
d6a2578b-c58f-43be-b6ad-30a3a0a57a74, Call Stack: null, Custom ID: null, Custom 
Event ID: -1, Message: Installing Host Node. Stage: Termination.

2018-06-18 11:23:12,193+05 INFO  
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] 
Connecting to mrpl.kvm2/172.31.1.32

2018-06-18 11:23:12,206+05 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] 
(org.ovirt.thread.pool-7-thread-47) [d6a2578b-c58f-43be-b6ad-30a3a0a57a74] 
Error: java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils

2018-06-18 11:23:12,206+

[ovirt-users] Re: HE + Gluster : Engine corrupted?

2018-07-02 Thread Ravishankar N



On 07/02/2018 02:15 PM, Krutika Dhananjay wrote:

Hi,

So it seems some of the files in the volume have mismatching gfids. I 
see the following logs from 15th June, ~8pm EDT:



...
...
[2018-06-16 04:00:10.264690] E [MSGID: 108008] 
[afr-self-heal-common.c:335:afr_gfid_split_brain_source] 
0-engine-replicate-0: Gfid mismatch detected for 
/hosted-engine.lockspace>, 
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and 
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.


You can use 
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/ 
(see 3. Resolution of split-brain using gluster CLI).
Nit: The doc says in the beginning that gfid split-brain cannot be fixed 
automatically but newer releases do support it, so the methods in 
section 3 should work to solve gfid split-brains.


[2018-06-16 04:00:10.265861] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4411: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:11.522600] E [MSGID: 108008] 
[afr-self-heal-common.c:212:afr_gfid_split_brain_source] 
0-engine-replicate-0: All the bricks should be up to resolve the gfid 
split barin

This is a concern. For the commands to work, all 3 bricks must be online.
Thanks,
Ravi
[2018-06-16 04:00:11.522632] E [MSGID: 108008] 
[afr-self-heal-common.c:335:afr_gfid_split_brain_source] 
0-engine-replicate-0: Gfid mismatch detected for 
/hosted-engine.lockspace>, 
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and 
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:11.523750] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4493: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:12.864393] E [MSGID: 108008] 
[afr-self-heal-common.c:212:afr_gfid_split_brain_source] 
0-engine-replicate-0: All the bricks should be up to resolve the gfid 
split barin
[2018-06-16 04:00:12.864426] E [MSGID: 108008] 
[afr-self-heal-common.c:335:afr_gfid_split_brain_source] 
0-engine-replicate-0: Gfid mismatch detected for 
/hosted-engine.lockspace>, 
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and 
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:12.865392] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4575: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:18.716007] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4657: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:20.553365] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4739: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:21.771698] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4821: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:23.871647] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4906: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:25.034780] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4987: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)

...
...


Adding Ravi, who works on the replicate component, to help resolve the 
mismatches.


-Krutika


On Mon, Jul 2, 2018 at 12:27 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote:


Hi,

Sorry, I was out sick on Friday. I am looking into the logs. Will
get back to you in some time.

-Krutika

On Fri, Jun 29, 2018 at 7:47 PM, Hanson Turner <han...@andrewswireless.net> wrote:

Hi Krutika,

Did you need any other logs?


Thanks,

Hanson


On 06/27/2018 02:04 PM, Hanson Turner wrote:


Hi Krutika,

Looking at the email spams, it looks like it started at
8:04PM EDT on Jun 15 2018.

From my memory, I think the cluster was working fine until
sometime that night. Somewhere between midnight and the next
(Saturday) morning, the engine crashed and all vm's stopped.

I do have nightly backups that ran every night, using the
engine-backup command. Looks like my last valid backup was
2018-06-15.

I've included all logs I think might be of use. Please
forgive the use of 7zip, as the raw logs took 50mb which is
greater than my attachment limit.

I think the gist of what happened is that we had a downed node
for a period of time. Earlier that day, the node was brought
back into service. Later that night or early the next
morning, the engine was gone and hopping from node to node.

I

[ovirt-users] Ovirt vs RHEV

2018-07-02 Thread Krzysztof Wajda
Hi,

can anyone explain the differences between oVirt and RHEV in version 4.1? I
know that RHEV has a different product life-cycle, and there are some
differences in the installation process. Do you know if there are any
additional features in oVirt or RHEV? Do you know of any official statements
from RH about the differences? From my experience there are not too many
differences, but I need to prepare such a comparison for business, so I
would like to get the info from a reliable source.


Chris
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TP3MNCO3WUC5OYQPZOMUOB4GONBBQCOM/


[ovirt-users] [ANN] oVirt 4.2.5 First Release Candidate is now available

2018-07-02 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.2.5 First Release Candidate, as of July 2nd, 2018.

This update is a release candidate of the fourth in a series of
stabilization updates to the 4.2
series.
This is pre-release software. This pre-release should not be used in
production.

This release is available now for:
* Red Hat Enterprise Linux 7.5 or later
* CentOS Linux (or similar) 7.5 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.5 or later
* CentOS Linux (or similar) 7.5 or later

See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance is available
- oVirt Node will be available soon [2]

Additional Resources:
* Read more about the oVirt 4.2.5 release highlights:
http://www.ovirt.org/release/4.2.5/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.5/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/

-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LAZZBQI2OY76LYLBSXHPHTQNGDX26YAA/


[ovirt-users] oVirt LLDP Labeler

2018-07-02 Thread Ales Musil
Hello,

I would like to announce that oVirt LLDP Labeler is officially available.

The Labeler is a service that runs alongside the engine and is capable of
labeling host network interfaces according to the VLANs they report via
LLDP. The attached labels are named "lldp_vlan_${VLAN}", where ${VLAN} is
the ID of the corresponding VLAN. This can make an administrator's work
easier, because any network with the same label will be automatically
attached to the corresponding host interface.
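
As a rough illustration of the other half of that workflow, the snippet below
(a sketch assuming the Python SDK, ovirtsdk4, and a placeholder network name)
attaches the label lldp_vlan_100 to a logical network, so it is then
automatically attached to every host interface the Labeler tagged with that
label:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Engine URL and credentials are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

try:
    networks_service = connection.system_service().networks_service()
    # 'vlan100-net' is a placeholder logical network name.
    net = networks_service.list(search='name=vlan100-net')[0]

    # Give the network the same label the Labeler puts on host NICs for
    # VLAN 100; the network is then auto-attached to those interfaces.
    labels_service = networks_service.network_service(net.id).network_labels_service()
    labels_service.add(types.NetworkLabel(id='lldp_vlan_100'))
finally:
    connection.close()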

The Labeler is currently tested only with Juniper switches which are
capable of reporting all of their VLANs that are present on the interface.

We would like to extend the Labeler with an auto-bonding feature: interfaces
that are detected on the same switch would be automatically bonded.

The Labeler source is available here:
https://github.com/almusil/ovirt-lldp-labeler
And the build:
https://copr.fedorainfracloud.org/coprs/amusil/ovirt-lldp-labeler/


If you have any suggestions or problems please don't hesitate to report
them on the GitHub page.

-- 

ALES MUSIL
Associate software engineer - rhv network

Red Hat EMEA 


amu...@redhat.com   IM: amusil

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PIBBRGEVHNXF3VAOLFXHTNCWZ3ZUCNZA/


[ovirt-users] Re: HE + Gluster : Engine corrupted?

2018-07-02 Thread Krutika Dhananjay
Hi,

So it seems some of the files in the volume have mismatching gfids. I see
the following logs from 15th June, ~8pm EDT:


...
...
[2018-06-16 04:00:10.264690] E [MSGID: 108008]
[afr-self-heal-common.c:335:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
/hosted-engine.lockspace>,
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:10.265861] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4411: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:11.522600] E [MSGID: 108008]
[afr-self-heal-common.c:212:afr_gfid_split_brain_source]
0-engine-replicate-0: All the bricks should be up to resolve the gfid split
barin
[2018-06-16 04:00:11.522632] E [MSGID: 108008]
[afr-self-heal-common.c:335:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
/hosted-engine.lockspace>,
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:11.523750] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4493: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:12.864393] E [MSGID: 108008]
[afr-self-heal-common.c:212:afr_gfid_split_brain_source]
0-engine-replicate-0: All the bricks should be up to resolve the gfid split
barin
[2018-06-16 04:00:12.864426] E [MSGID: 108008]
[afr-self-heal-common.c:335:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
/hosted-engine.lockspace>,
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:12.865392] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4575: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:18.716007] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4657: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:20.553365] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4739: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:21.771698] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4821: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:23.871647] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4906: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:25.034780] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4987: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
...
...


Adding Ravi, who works on the replicate component, to help resolve the mismatches.

-Krutika


On Mon, Jul 2, 2018 at 12:27 PM, Krutika Dhananjay 
wrote:

> Hi,
>
> Sorry, I was out sick on Friday. I am looking into the logs. Will get back
> to you in some time.
>
> -Krutika
>
> On Fri, Jun 29, 2018 at 7:47 PM, Hanson Turner wrote:
>
>> Hi Krutika,
>>
>> Did you need any other logs?
>>
>>
>> Thanks,
>>
>> Hanson
>>
>> On 06/27/2018 02:04 PM, Hanson Turner wrote:
>>
>> Hi Krutika,
>>
>> Looking at the email spams, it looks like it started at 8:04PM EDT on Jun
>> 15 2018.
>>
>> From my memory, I think the cluster was working fine until sometime that
>> night. Somewhere between midnight and the next (Saturday) morning, the
>> engine crashed and all vm's stopped.
>>
>> I do have nightly backups that ran every night, using the engine-backup
>> command. Looks like my last valid backup was 2018-06-15.
>>
>> I've included all logs I think might be of use. Please forgive the use of
>> 7zip, as the raw logs took 50mb which is greater than my attachment limit.
>>
>> I think the gist of what happened is that we had a downed node for a period
>> of time. Earlier that day, the node was brought back into service. Later
>> that night or early the next morning, the engine was gone and hopping from
>> node to node.
>>
>> I have tried to mount the engine's hdd file to see if I could fix it.
>> There are a few corrupted partitions, and those are xfs formatted. Trying
>> to mount gives me errors about needing repair; trying to repair gives me
>> errors about something needing to be cleaned first. I cannot remember exactly
>> what it was, but it wanted me to run a command that ended in -L to clear out
>> the logs. I said no way and have left the engine VM in a powered-down
>> state, as well as the cluster in global maintenance.
>>
>> I can see no sign of the VM booting (i.e. no networking), except for what
>> I've described earlier in the VNC session.
>>
>>
>> Thanks,
>>
>> Hanson
>>
>>
>>
>> On 06/27/2018 12:04 PM, Kru

[ovirt-users] Re: oVirt Authentication and Authorization

2018-07-02 Thread Hari Prasanth Loganathan
Hi Ondra,

If my query is not clear, please let me know. I would like to explain it
with examples.

Any help is much appreciated.
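
For context, this is roughly how the token hand-off described below looks today
with the Python SDK (ovirtsdk4): the application authenticates once, stores the
returned oVirt SSO token next to its own LDAP token, and reuses it for later
calls instead of sending credentials every time (URL and credentials are
placeholders):

import ovirtsdk4 as sdk

# First call: authenticate against the engine and obtain an SSO token.
# URL and credentials are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='user@ldap-profile',
    password='password',
    insecure=True,
)
ovirt_token = connection.authenticate()  # stored next to our own LDAP token
connection.close(logout=False)           # keep the token valid on the engine

# Later calls: reuse the stored token, no username/password in our code.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    token=ovirt_token,
    insecure=True,
)
print([vm.name for vm in connection.system_service().vms_service().list()])
connection.close()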

Thanks,
Hari

On Fri, Jun 29, 2018 at 5:09 PM, Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:

> Thanks Ondra for the response.
>
> *This is my use case : *
>
> We have three components in our setup
>
> 1) Our Script (application using python)
> 2) Ovirt
> 3) LDAP (Also integrated to oVirt)
>
> 1) Our Python application is authenticating to LDAP and it creates a token
> for our application
> 2) For accessing the APIs in oVirt, I need to contact the oVirt API,
> which authenticates and creates a token for it
> 3) then I need to maintain the token of my application with its mapping to
> the oVirt token id in my application.
>
> *Difficulty:*
>
>
> *When I want to hit any oVirt API, first I perform the token check in my
> application (using my application token), then I need to perform the oVirt
> token check in oVirt using the oVirt token id I maintain in the
> application.*
>
> *To achieve: *
>
> *So I want a feature which performs the authentication check only in my
> application; from my application I then contact the oVirt APIs without
> any authentication / authorization check. I don't want oVirt to perform
> an authentication / authorization check.*
>
>
>
> *1) I would like to know: is there a way to skip authentication and
> authorization in oVirt? 2) Or is it possible to point the authentication
> validation of oVirt to my application / to some URL which I configure,
> which always returns true and allows all oVirt APIs? If anything is not
> clear I will update the mail and send it to you.*
>
>
>
> *Thanks *
>
>
>
>
> On Fri, Jun 29, 2018 at 5:00 PM, Ondra Machacek 
> wrote:
>
>> What's your use-case? Do you need all users to access without any
>> username/password? Why not rather share the username/password of a guest
>> account with them?
>>
>> On 06/29/2018 12:39 PM, Hari Prasanth Loganathan wrote:
>>
>>> Guys any update on this, If you have any clarification in my query
>>> please let me know.
>>>
>>> Thanks,
>>> Hari
>>>
>>> On Thu, Jun 28, 2018 at 6:19 PM, Hari Prasanth Loganathan <
>>> hariprasant...@msystechnologies.com> wrote:
>>>
>>> Hi Team,
>>>
>>> We have three components in our setup
>>>
>>> 1) Our Script (application using python)
>>> 2) Ovirt
>>> 3) LDAP (Also integrated to oVirt)
>>>
>>> 1) Our Python application is authenticating to LDAP and it creates a
>>> token for our application
>>> 2) For accessing the API's in oVIrt, I need to contact to the oVirt
>>> API which authenticates and creates a token for it
>>> 3) then I need to maintain the token of my application with its
>>> mapping to the ovirt tokenId in my application.
>>>
>>> When I want to hit any oVirt API, first I perform the token check in
>>> my application (using my application token), then I need to perform
>>> the oVirt token check in oVirt.
>>>
>>> 1) *I would like to know: is there a way to skip authentication
>>> and authorization in oVirt?*
>>> 2) *Or is it possible to point the authentication check for oVirt (to
>>> my application / to some URL which I configure) so that it always
>>> returns true and allows all oVirt APIs?*
>>>
>>>
>>> *I did some analysis and verified the oVirt code on GitHub, and
>>> identified that it goes via a filter in web.xml which points to
>>> the class. Is it possible to tune this? *
>>>
>>>
>>> <filter>
>>>     <filter-name>RestApiSessionValidationFilter</filter-name>
>>>     <filter-class>org.ovirt.engine.core.aaa.filters.RestApiSessionValidationFilter</filter-class>
>>> </filter>
>>> <filter-mapping>
>>>     <filter-name>RestApiSessionValidationFilter</filter-name>
>>>     <url-pattern>/*</url-pattern>
>>> </filter-mapping>
>>>
>>> <filter>
>>>     <filter-name>SessionValidationFilter</filter-name>
>>>     <filter-class>org.ovirt.engine.core.aaa.filters.SessionValidationFilter</filter-class>
>>> </filter>
>>> <filter-mapping>
>>>     <filter-name>SessionValidationFilter</filter-name>
>>>     <url-pattern>/*</url-pattern>
>>> </filter-mapping>
>>>
>>> <filter>
>>>     <filter-name>SsoRestApiAuthFilter</filter-name>
>>>     <filter-class>org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter</filter-class>
>>> </filter>
>>> <filter-mapping>
>>>     <filter-name>SsoRestApiAuthFilter</filter-name>
>>>     <url-pattern>/*</url-pattern>
>>> </filter-mapping>
>>>
>>> <filter>
>>>     <filter-name>SsoRestApiNegotiationFilter</filter-name>
>>>     <filter-class>org.ovirt.engine.core.aaa.filters.SsoRestApiNegotiationFilter</filter-class>
>>> </filter>
>>> <filter-mapping>
>>>     <filter-name>SsoRestApiNegotiationFilter</filter-name>
>>>     <url-pattern>/*</url-pattern>
>>> </filter-mapping>
>>>
>>> If my query is not clear, please let me know.
>>>
>>> Thanks,
>>> Hari
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/communit
>>> y/about/community-guidelines/
>>> Li

[ovirt-users](v4.2.5-1.el7) Snapshots UI - html null

2018-07-02 Thread Maton, Brett
Hi,

  I'm trying to restore a VM snapshot through the UI but keep running into
this error:

Uncaught exception occurred. Please try reloading the page. Details:
Exception caught: html is null
Please have your administrator check the UI logs

ui log attached.

CentOS 7
oVirt 4.2.5-1.el7
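
If it helps as a workaround while the UI issue is looked at, restoring the
snapshot through the Python SDK (ovirtsdk4) should also be possible; a rough
sketch, with placeholder VM and snapshot names and assuming the VM is shut
down first:

import ovirtsdk4 as sdk

# Engine URL and credentials are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=my-vm')[0]  # placeholder VM name

    # Find the snapshot to restore by its description (placeholder text).
    snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
    snap = next(s for s in snapshots_service.list()
                if s.description == 'before-upgrade')

    # Restore the VM to that snapshot (the VM must be down).
    snapshots_service.snapshot_service(snap.id).restore()
finally:
    connection.close()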

Regards,
Brett


ui.log
Description: Binary data
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NFV5AIZRDUCZT7P7TZHV5RUXE7XVQV6X/


[ovirt-users] Re: hyperconverged cluster - how to change the mount path?

2018-07-02 Thread Gobinda Das
You can do it by using the "Manage Domain" option on the Storage Domain.
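
If you prefer to script it, a rough sketch with the Python SDK (ovirtsdk4) is
below. The connection id, address, path and mount options are placeholders,
the domain has to be in maintenance first, and the exact service and parameter
names should be double-checked against the SDK documentation for your version:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Engine URL and credentials are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

try:
    # Locate the storage connection behind the hosted_storage domain;
    # 'CONNECTION-UUID' is a placeholder for its id.
    conns_service = connection.system_service().storage_connections_service()
    conn_service = conns_service.storage_connection_service('CONNECTION-UUID')

    # Point the mount at a stable round-robin name and list the other
    # bricks as backup servers instead of relying on a single host.
    conn_service.update(
        types.StorageConnection(
            address='gluster1',
            path='/hosted_storage',
            mount_options='backup-volfile-servers=host2:host3',
        )
    )
finally:
    connection.close()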

On Sun, Jul 1, 2018 at 7:02 PM, Alex K  wrote:

> The steps roughly would be to put that storage domain in maintenance then
> edit/redefine it. You have the option to set gluster mount point options
> for the redundancy part. No need to set dns round robin.
>
> Alex
>
> On Sun, Jul 1, 2018, 13:29 Liebe, André-Sebastian 
> wrote:
>
>> Hi list,
>>
>> I'm looking for advice on how to change the mount point of the
>> hosted_storage due to a hostname change.
>>
>> When I set up our hyperconverged lab cluster (host1, host2, host3) I
>> populated the mount path with host3:/hosted_storage which wasn't very
>> clever as it brings in a single point of failure (i.e. when host3 is down).
>> So I thought adding a round-robin dns/hosts entry (i.e. gluster1) for
>> hosts 1 to 3 and changing the mount path would be a better idea. But the
>> mount path entry is locked in the web GUI and I couldn't find any hint on
>> how to change it manually (in the database, shared and local configuration)
>> in a consistent way without risking the cluster.
>> So, is there a step-by-step guide on how to achieve this without
>> reinstalling (from backup)?
>>
>>
>> Sincerely
>>
>> André-Sebastian Liebe
>> Technik / Innovation
>>
>> gematik
>> Gesellschaft für Telematikanwendungen der Gesundheitskarte mbH
>> Friedrichstraße 136
>> 10117 Berlin
>> Telefon: +49 30 40041-197
>> Telefax: +49 30 40041-111
>> E-Mail:  andre.li...@gematik.de
>> www.gematik.de
>> ___
>> Amtsgericht Berlin-Charlottenburg HRB 96351 B
>> Geschäftsführer: Alexander Beyer
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>> guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
>> message/B2R6G3VCK545RKT5BMAQ5EXO4ZFJSMFG/
>>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/QKNPBUXPIHNYN2NT63KUCYZOBZO5HUOL/
>
>


-- 
Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXUMGTNZ3KJ3UXCT53LWN7PZIKI3Y7XX/