[ovirt-users] VM - Disks - Table too small

2018-07-03 Thread Maton, Brett
The table which displays disk info is too small when moving disks between
storage domains, probably because the progress bar is added below the
'locked' status but the table doesn't resize to accommodate the taller rows.



Tried in Chrome, Edge, Firefox, Internet Explorer & Safari
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3RPM6QVPJRRZSSJR4F6UMMPCDFMYWTOR/


[ovirt-users] Re: Engine Setup Error

2018-07-03 Thread Sahina Bose
It looks like a problem accessing the engine gluster volume.  Can you
provide the logs from /var/log/gluster/rhev-data*engine.log as well as the
vdsm.log from the host.
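
A minimal sketch of bundling those logs in one go (the vdsm and
hosted-engine-setup paths are the usual defaults, and the gluster glob assumes
the standard /var/log/glusterfs/ location, so adjust to your install):

  tar czf engine-deploy-logs.tar.gz \
      /var/log/glusterfs/rhev-data*engine*.log \
      /var/log/vdsm/vdsm.log \
      /var/log/ovirt-hosted-engine-setup/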

On Wed, Jul 4, 2018 at 11:07 AM, Yedidyah Bar David  wrote:

> On Tue, Jul 3, 2018 at 3:28 PM, Sakhi Hadebe  wrote:
> > Hi,
> >
> > We are deploying the hosted engine on oVirt-Node-4.2.3.1 using the
> command
> > "hosted-engine --deploy".
> >
> > After providing answers, it runs the Ansible script and hits an error when
> > creating the GlusterFS storage domain. Attached is a screenshot of the error.
>
> Adding Sahina.
>
> Please check/share relevant logs from the host. Thanks.
>
> Best regards,
> --
> Didi
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SA5QSXSHIJPTXZ45JMQWDQV6OJRIKAPK/


[ovirt-users] Re: Engine Setup Error

2018-07-03 Thread Yedidyah Bar David
On Tue, Jul 3, 2018 at 3:28 PM, Sakhi Hadebe  wrote:
> Hi,
>
> We are deploying the hosted engine on oVirt-Node-4.2.3.1 using the command
> "hosted-engine --deploy".
>
> After providing answers, it runs the Ansible script and hits an error when
> creating the GlusterFS storage domain. Attached is a screenshot of the error.

Adding Sahina.

Please check/share relevant logs from the host. Thanks.

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UX7ADEV3XE4D45RWTZFBHZIJFEMI6FKA/


[ovirt-users] Re: API token in postgres

2018-07-03 Thread Hari Prasanth Loganathan
Guys, any help is appreciated.

I am not able to find the table in Postgres. Please let me know.

Thanks
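
One way to hunt for candidate tables, sketched assuming the default engine
database name "engine" and the rh-postgresql95 software collection a 4.2
engine host typically uses (adjust both to your setup):

  su - postgres -c 'scl enable rh-postgresql95 -- psql engine'
  engine=# SELECT table_name, column_name
           FROM information_schema.columns
           WHERE column_name ILIKE '%token%' OR column_name ILIKE '%session%';

If nothing relevant shows up, that would suggest the sessionId/token mapping
is held in memory by the engine rather than persisted in a Postgres table.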

On Tue, 3 Jul 2018 at 9:52 PM, Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:

> Hi Guys,
>
> Could somebody help with the Postgres table? Which table is used to store
> the mapping between sessionId and SSO token?
>
> If my query is not clear, please let me know.
>
> Thanks,
> Hari
>
> On Tue, Jul 3, 2018 at 6:14 PM, Hari Prasanth Loganathan <
> hariprasant...@msystechnologies.com> wrote:
>
>> Hi Team,
>>
>> Which postgres table is used to store the relation between sessionId and
>> SSO token ?
>>
>> I verified the *github* :
>> https://github.com/oVirt/ovirt-engine/blob/d910a6e14bdb9fad0f21b8d9f22723f53db2fd2d/backend/manager/modules/aaa/src/main/java/org/ovirt/engine/core/aaa/filters/SsoRestApiAuthFilter.java
>>
>> *Code** :*
>>
>> QueryReturnValue queryRetVal = FiltersHelper.getBackend(ctx).
>> runPublicQuery(
>> QueryType.*GetEngineSessionIdForSsoToken*,
>> new GetEngineSessionIdForSsoTokenQueryParameters(token));
>>
>>
>>
>> Which table in postgres has the mapping between sessionId and sso token?
>> Could somebody help me on this?
>>
>> Thanks,
>> Hari
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TKKLAFAUV6X2IPCSEXBTDJWFEXWDM365/


[ovirt-users] Re: Cannot import a qcow2 image

2018-07-03 Thread Nir Soffer
On Tue, Jul 3, 2018 at 11:47 PM Nir Soffer  wrote:

> On Tue, 3 Jul 2018, 15:44 ,  wrote:
>
>> Hello,
>>
>> I'm trying without success to import a qcow2 file into oVirt. I tried on
>> an iSCSI data domain and an NFS data domain.
>>
>> I struggled quite a lot to get the "test connection" to succeed (I wrote a
>> small shell script to "deploy" Let's Encrypt certificates into the oVirt engine).
>>
>> The docs are not clear on the fact that the certificates for imageio-proxy are
>> different from those for the main engine…
>>
>>
>> Now, the upload fails with
>>
>> Transfer was stopped by system. Reason: failed to add image ticket to
>> ovirt-imageio-proxy.
>> Image gets stuck in "transfer paused by system"
>>
>> Any idea ?
>>
>
> You probably have a bad certificate configuration in the proxy. Why not use
> the default certificates generated by engine-setup? This is how we test the
> proxy.
>

Can you share the contents of:
/etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf

and the proxy log at
/var/log/ovirt-imageio-proxy/image-proxy.log
showing the time of the error ("failed to add image ticket to
ovirt-imageio-proxy")?

Nir
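
A quick way to pull those together, sketched with the paths above plus a grep
window around the ticket error (the pattern and context size are guesses,
widen them as needed):

  cat /etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf
  grep -i -B 5 -A 20 'ticket' /var/log/ovirt-imageio-proxy/image-proxy.log | tail -n 200
  systemctl status ovirt-imageio-proxy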


>
>
>> oVirt is up to date: 4.2.4 on both engine and hosts.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FTC3PBZCRRTI2LBADOPOS2EYRCZ6EQA3/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PRRW3GZYHURRFMSDSJMXWBX4DIGHXE43/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed (with solution)

2018-07-03 Thread Matt Simonsen

Many thanks to Yuval.

After moving the discussion to #ovirt, I tried "fstrim -a" and this 
allowed the upgrade to complete successfully.
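
For anyone hitting the same thing, the sequence boils down to the following
(a sketch, assuming a thin-pool-backed oVirt Node layout like the one in the
lvs output further down this thread):

  lvs -o lv_name,data_percent,metadata_percent   # note thinpool usage before
  fstrim -a                                      # discard unused blocks on all mounted filesystems
  lvs -o lv_name,data_percent,metadata_percent   # usage should drop; then retry the upgrade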


Matt







On 07/03/2018 12:19 PM, Yuval Turgeman wrote:

Hi Matt,

I would try to run `fstrim -a` (man fstrim) and see if it frees 
anything from the thinpool.  If you do decide to run this, please send 
the output for lvs again.


Also, are you on #ovirt ?

Thanks,
Yuval.


On Tue, Jul 3, 2018 at 9:00 PM, Matt Simonsen wrote:


Thank you again for the assistance with this issue.

Below is the result of the command below.

In the future I am considering using different Logical RAID
Volumes to get different devices (sda, sdb, etc) for the oVirt
Node image & storage filesystem to simplify.  However I'd like to
understand why this upgrade failed and also how to correct it if
at all possible.

I believe I need to recreate the /var/crash partition? I
> incorrectly removed it; is it simply a matter of using LVM to add
a new partition and format it?

Secondly, do you have any suggestions on how to move forward with
the error regarding the pool capacity? I'm not sure if this is a
legitimate error or problem in the upgrade process.

Thanks,

Matt




On 07/03/2018 03:58 AM, Yuval Turgeman wrote:

Not sure this is the problem; autoextend should be enabled for
the thinpool, and `lvs -o +profile` should show imgbased-pool
(defined at /etc/lvm/profile/imgbased-pool.profile)

On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David
> <d...@redhat.com> wrote:

> > On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <m...@khoza.com> wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue
given I have several hundred GB of storage in the thin pool
that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                      4.79
>   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
>   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34

I think your thinpool meta volume is close to full and needs
to be enlarged.
This quite likely happened because you extended the thinpool
without
extending the meta vol.

Check also 'lvs -a'.

This might be enough, but check the names first:

lvextend -L+200m onn_node1-g8-h4/pool00_tmeta

Best regards,
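
A sketch of those checks using the VG and pool names from the lvs output in
this thread (verify the names on your own host first):

  # include hidden volumes; pool00_tmeta and its Meta% are what matter here
  lvs -a -o lv_name,lv_size,data_percent,metadata_percent onn_node1-g8-h4
  # the vgs output below shows 8.00g free in the VG, so the metadata LV can be grown
  lvextend -L +200m onn_node1-g8-h4/pool00_tmeta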

>   root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>   tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                             5.04
>   var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                             5.86
>   var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>   var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                            89.72
>   var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                             6.84
>   var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                             6.16
> [root@node6-g8-h4 ~]# vgs
>   VG              #PV #LV #SN Attr  VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version:
imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:

Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
command='update', debug=True, experimental=False,
format='liveimg', stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting
image

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling
binary: (['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {'close_fds':
True, 'stderr': -2}
> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned:
/tmp/mnt.1OhaU
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling
binary: (['mount',

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
OK, good - this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1
still exists without its base. Try this:

1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
2. nodectl info

On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> I did it, with issues, see attachment.
>
>
>
>
> Am 03.07.2018 um 22:25 schrieb Yuval Turgeman :
>
> Hi Oliver,
>
> I would try the following, but please notice it is *very* dangerous, so a
> backup is probably a good idea (man vgcfgrestore)...
>
> 1. vgcfgrestore --list onn_ovn-monster
> 2. search for a .vg file that was created before deleting those 2 lvs (
> ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
> 3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
> 4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
> 5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
> 6. lvremove the lvs from the thinpool that are not mounted/used
> (var_crash?)
> 7. nodectl info to make sure everything is ok
> 8. reinstall the image-update rpm
>
> Thanks,
> Yuval.
>
>
>
> On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman 
> wrote:
>
>> Hi Oliver,
>>
>> The KeyError happens because there are no bases for the layers.  For each
>> LV that ends with a +1, there should be a base read-only LV without +1.  So
>> for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This is the
>> reason nodectl info fails, and the upgrade will fail also.  In your
>> original email it looks OK - I have never seen this happen, was this a
>> manual lvremove ? I need to reproduce this and check what can be done.
>>
>> You can find me on #ovirt (irc.oftc.net) also :)
>>
>>
>> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
>> oliver.riese...@hs-bremen.de> wrote:
>>
>>> Yuval, here comes the lvs output.
>>>
>>> The I/O errors are because the node is in maintenance.
>>> The LV root is from a previously installed CentOS 7.5.
>>> Then I installed node-ng 4.2.1 and got this mix.
>>> The LV turbo is an SSD in its own VG named ovirt.
>>>
>>> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 and its (+1) because of a
>>> nodectl info error:
>>>
>>> KeyError: >>
>>> Now I get the error @4.2.3:
>>> [root@ovn-monster ~]# nodectl info
>>> Traceback (most recent call last):
>>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>>> "__main__", fname, loader, pkg_name)
>>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>>> exec code in run_globals
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
>>> in 
>>> CliApplication()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
>>> 200, in CliApplication
>>> return cmdmap.command(args)
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
>>> 118, in command
>>> return self.commands[command](**kwargs)
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
>>> in info
>>> Info(self.imgbased, self.machine).write()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
>>> __init__
>>> self._fetch_information()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
>>> _fetch_information
>>> self._get_layout()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
>>> _get_layout
>>> layout = LayoutParser(self.app.imgbase.layout()).parse()
>>>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line
>>> 155, in layout
>>> return self.naming.layout()
>>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
>>> in layout
>>> tree = self.tree(lvs)
>>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
>>> in tree
>>> bases[img.base.nvr].layers.append(img)
>>> KeyError: 
>>>
>>> lvs -a
>>>
>>> [root@ovn-monster ~]# lvs -a
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 0: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 5497568559104: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 5497568616448: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 4096: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>>> 4096 at 0: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>>> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>>> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>>> 4096 at 4096: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>>> 4096 at 0: Eingabe-/Ausgabefehler
>>>   /dev/ma

[ovirt-users] Re: Cannot import a qcow2 image

2018-07-03 Thread Nir Soffer
On Tue, 3 Jul 2018, 15:44 ,  wrote:

> Hello,
>
> I'm trying without success to import a qcow2 file into oVirt. I tried on
> an iSCSI data domain and an NFS data domain.
>
> I struggled quite a lot to get the "test connection" to succeed (I wrote a
> small shell script to "deploy" Let's Encrypt certificates into the oVirt engine).
>
> The docs are not clear on the fact that the certificates for imageio-proxy are
> different from those for the main engine…
>
>
> Now, the upload fails with
>
> Transfer was stopped by system. Reason: failed to add image ticket to
> ovirt-imageio-proxy.
> Image gets stuck in "transfer paused by system"
>
> Any idea ?
>

You probably have a bad certificate configuration in the proxy. Why not use
the default certificates generated by engine-setup? This is how we test the
proxy.
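
A quick way to see which certificate the proxy is actually serving, sketched
on the assumption that it listens on the default port 54323 on the engine
host (replace <engine-fqdn> accordingly):

  echo | openssl s_client -connect <engine-fqdn>:54323 2>/dev/null \
      | openssl x509 -noout -issuer -subject -dates

If the issuer is not the engine CA, the proxy is still using the replacement
certificates.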


> oVirt is up to date: 4.2.4 on both engine and hosts.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FTC3PBZCRRTI2LBADOPOS2EYRCZ6EQA3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TJK43FANHYWQYMZWNZZXPZTMBSNY2FCP/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Hi Oliver,

I would try the following, but please notice it is *very* dangerous, so a
backup is probably a good idea (man vgcfgrestore)...

1. vgcfgrestore --list onn_ovn-monster
2. search for a .vg file that was created before deleting those 2 lvs (
ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
6. lvremove the lvs from the thinpool that are not mounted/used (var_crash?)
7. nodectl info to make sure everything is ok
8. reinstall the image-update rpm

Thanks,
Yuval.



On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman  wrote:

> Hi Oliver,
>
> The KeyError happens because there are no bases for the layers.  For each
> LV that ends with a +1, there should be a base read-only LV without +1.  So
> for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This is the
> reason nodectl info fails, and the upgrade will fail also.  In your
> original email it looks OK - I have never seen this happen, was this a
> manual lvremove ? I need to reproduce this and check what can be done.
>
> You can find me on #ovirt (irc.oftc.net) also :)
>
>
> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
>
>> Yuval, here comes the lvs output.
>>
>> The I/O errors are because the node is in maintenance.
>> The LV root is from a previously installed CentOS 7.5.
>> Then I installed node-ng 4.2.1 and got this mix.
>> The LV turbo is an SSD in its own VG named ovirt.
>>
>> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 and its (+1) because of a
>> nodectl info error:
>>
>> KeyError: >
>> Now I get the error @4.2.3:
>> [root@ovn-monster ~]# nodectl info
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>> "__main__", fname, loader, pkg_name)
>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>> exec code in run_globals
>>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
>> in 
>> CliApplication()
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
>> in CliApplication
>> return cmdmap.command(args)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
>> in command
>> return self.commands[command](**kwargs)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
>> in info
>> Info(self.imgbased, self.machine).write()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
>> __init__
>> self._fetch_information()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
>> _fetch_information
>> self._get_layout()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
>> _get_layout
>> layout = LayoutParser(self.app.imgbase.layout()).parse()
>>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155,
>> in layout
>> return self.naming.layout()
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
>> in layout
>> tree = self.tree(lvs)
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
>> in tree
>> bases[img.base.nvr].layers.append(img)
>> KeyError: 
>>
>> lvs -a
>>
>> [root@ovn-monster ~]# lvs -a
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 0: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 5497568559104: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 5497568616448: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 4096: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 0: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 4096: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>> 4096 at 0: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
>> 4096 at 4096: Eingabe-/Ausgabefehler
>>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after
>> 0 of 4096 at 0: Eingabe-/Ausgabefehler
>>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Hi Oliver,

The KeyError happens because there are no bases for the layers.  For each
LV that ends with a +1, there should be a base read-only LV without +1.  So
for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This is the
reason nodectl info fails, and the upgrade will fail also.  In your
original email it looks OK - I have never seen this happen, was this a
manual lvremove ? I need to reproduce this and check what can be done.

You can find me on #ovirt (irc.oftc.net) also :)


On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Yuval, here comes the lvs output.
>
> The I/O errors are because the node is in maintenance.
> The LV root is from a previously installed CentOS 7.5.
> Then I installed node-ng 4.2.1 and got this mix.
> The LV turbo is an SSD in its own VG named ovirt.
>
> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 and its (+1) because of a
> nodectl info error:
>
> KeyError: 
> Now I get the error @4.2.3:
> [root@ovn-monster ~]# nodectl info
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
> in 
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
> in CliApplication
> return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
> in command
> return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
> in info
> Info(self.imgbased, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
> __init__
> self._fetch_information()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
> _fetch_information
> self._get_layout()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
> _get_layout
> layout = LayoutParser(self.app.imgbase.layout()).parse()
>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155,
> in layout
> return self.naming.layout()
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
> in layout
> tree = self.tree(lvs)
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
> in tree
> bases[img.base.nvr].layers.append(img)
> KeyError: 
>
> lvs -a
>
> [root@ovn-monster ~]# lvs -a
>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
> 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
> 4096 at 5497568559104: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
> 4096 at 5497568616448: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
> 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
> 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
> 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
> 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>   /dev/mapper/36090a02860eea13dc5aed55e4cc57698: read failed after 0 of
> 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0
> of 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0
> of 4096 at 536805376: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0
> of 4096 at 536862720: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/metadata: read failed after 0
> of 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of
> 4096 at 0: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of
> 4096 at 134152192: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of
> 4096 at 134209536: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/ids: read failed after 0 of
> 4096 at 4096: Eingabe-/Ausgabefehler
>   /dev/675cb45d-3746-4f3b-b9ee-516612da50e5/leases: read failed after 0
> of 4096 

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Hi Matt,

I would try to run `fstrim -a` (man fstrim) and see if it frees anything
from the thinpool.  If you do decide to run this, please send the output
for lvs again.

Also, are you on #ovirt ?

Thanks,
Yuval.


On Tue, Jul 3, 2018 at 9:00 PM, Matt Simonsen  wrote:

> Thank you again for the assistance with this issue.
>
> Below is the result of the command below.
>
> In the future I am considering using different Logical RAID Volumes to get
> different devices (sda, sdb, etc) for the oVirt Node image & storage
> filesystem to simplify.  However I'd like to understand why this upgrade
> failed and also how to correct it if at all possible.
>
> I believe I need to recreate the /var/crash partition? I incorrectly
> removed it; is it simply a matter of using LVM to add a new partition and
> format it?
>
> Secondly, do you have any suggestions on how to move forward with the
> error regarding the pool capacity? I'm not sure if this is a legitimate
> error or problem in the upgrade process.
>
> Thanks,
>
> Matt
>
>
>
>
> On 07/03/2018 03:58 AM, Yuval Turgeman wrote:
>
> Not sure this is the problem; autoextend should be enabled for the
> thinpool, and `lvs -o +profile` should show imgbased-pool (defined at
> /etc/lvm/profile/imgbased-pool.profile)
>
> On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David 
> wrote:
>
>> On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen  wrote:
>> >
>> > This error adds some clarity.
>> >
>> > That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>> >
>> > How do you suggest I proceed?
>> >
>> > Thank you for your help,
>> >
>> > Matt
>> >
>> >
>> >
>> > [root@node6-g8-h4 ~]# lvs
>> >
>> >   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>> >   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                      4.79
>> >   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>> >   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>> >   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>> >   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
>> >   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34
>>
>> I think your thinpool meta volume is close to full and needs to be
>> enlarged.
>> This quite likely happened because you extended the thinpool without
>> extending the meta vol.
>>
>> Check also 'lvs -a'.
>>
>> This might be enough, but check the names first:
>>
>> lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
>>
>> Best regards,
>>
>> >   root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>> >   tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                             5.04
>> >   var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                             5.86
>> >   var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>> >   var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                            89.72
>> >   var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                             6.84
>> >   var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                             6.16
>> > [root@node6-g8-h4 ~]# vgs
>> >   VG  #PV #LV #SN Attr   VSize  VFree
>> >   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>> >
>> >
>> > 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> > 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-
>> node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> > 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180
>> 626.0.el7.squashfs.img'
>> > 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary:
>> (['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
>> > 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> > 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> > 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> > 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> '/usr/share/ovirt-node

[ovirt-users] Re: Fresh 4.2 install: engine-setup skipping NFS share step

2018-07-03 Thread Dave Mintz
Thank you. 

Dave

> On Jul 1, 2018, at 2:28 AM, Yedidyah Bar David  wrote:
> 
>> On Sat, Jun 30, 2018 at 9:33 PM,   wrote:
>> Any idea why the install script is not asking me if I want to set up the 
>> local ISO nfs share?  I thought it might be because I am using the entire 
>> disk, but I tried creating a separate partition/mount for it and it still 
>> didn't work.
> 
> It was deprecated and disabled (I'd like to say "removed", but it's not, yet):
> 
> https://bugzilla.redhat.com/1332813
> 
> Sorry if it disrupts your habits, but setting up the ISO/NFS domain in
> engine-setup really should never have happened originally, IMHO. Feel free to find/write
> some script to do this automatically and use that. Feel free to reuse
> the existing python code from engine-setup, although it wasn't written
> to be reused standalone so probably isn't the best choice.
> 
> If you had some deeper reason to want/need this ISO/NFS domain, other
> than convenience, perhaps provide more details? Or open an RFE...
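
For reference, a rough sketch of what such a script would have to do on the
engine host; the export path and NFS options here are plausible defaults
rather than anything engine-setup guarantees:

  mkdir -p /var/lib/exports/iso
  chown 36:36 /var/lib/exports/iso          # vdsm:kvm
  echo '/var/lib/exports/iso *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
  exportfs -ra
  # then attach it from the Admin Portal as a new ISO domain pointing at this export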
> 
> Best regards,
> 
>> 
>> This is what I see:
>> --== APACHE CONFIGURATION ==--
>> 
>>  Setup can configure the default page of the web server to present 
>> the application home page. This may conflict with existing applications.
>>  Do you wish to set the application as the default page of the web 
>> server? (Yes, No) [Yes]:
>>  Setup can configure apache to use SSL using a certificate issued 
>> from the internal CA.
>>  Do you wish Setup to configure that, or prefer to perform that 
>> manually? (Automatic, Manual) [Automatic]:
>> 
>>  --== SYSTEM CONFIGURATION ==--
>> 
>> [The NFS questions should be here but it is blank]
>> 
>>  --== MISC CONFIGURATION ==--
>> 
>>  Please choose Data Warehouse sampling scale:
>>  (1) Basic
>>  (2) Full
>> 
>> Thanks in advance.
>> 
>> Dave
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YBIETA2OEIIUR2BZ4UK3ZWAAIPHBB66S/
> 
> 
> 
> -- 
> Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SEDFEIHYEVGNG5C3HHWYL5HDOCAMNRKE/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Matt Simonsen

Thank you again for the assistance with this issue.

Below is the result of the command below.

In the future I am considering using different Logical RAID Volumes to 
get different devices (sda, sdb, etc) for the oVirt Node image & storage 
filesystem to simplify.  However I'd like to understand why this upgrade 
failed and also how to correct it if at all possible.


I believe I need to recreate the /var/crash partition? I incorrectly 
removed it; is it simply a matter of using LVM to add a new partition
and format it?


Secondly, do you have any suggestions on how to move forward with the 
error regarding the pool capacity? I'm not sure if this is a legitimate 
error or problem in the upgrade process.


Thanks,

Matt




On 07/03/2018 03:58 AM, Yuval Turgeman wrote:
Not sure this is the problem; autoextend should be enabled for the
thinpool, and `lvs -o +profile` should show imgbased-pool (defined at
/etc/lvm/profile/imgbased-pool.profile)


On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David wrote:


On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <m...@khoza.com> wrote:
>
> This error adds some clarity.
>
> That said, I'm a bit unsure how the space can be the issue given
I have several hundred GB of storage in the thin pool that's unused...
>
> How do you suggest I proceed?
>
> Thank you for your help,
>
> Matt
>
>
>
> [root@node6-g8-h4 ~]# lvs
>
>   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                      4.79
>   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0  6.95
>   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                            76.63  50.34

I think your thinpool meta volume is close to full and needs to be
enlarged.
This quite likely happened because you extended the thinpool without
extending the meta vol.

Check also 'lvs -a'.

This might be enough, but check the names first:

lvextend -L+200m onn_node1-g8-h4/pool00_tmeta

Best regards,

>   root                                 onn_node1-g8-h4 Vwi---tz-- <50.06g pool00
>   tmp                                  onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                             5.04
>   var                                  onn_node1-g8-h4 Vwi-aotz--  15.00g pool00                                             5.86
>   var_crash                            onn_node1-g8-h4 Vwi---tz--  10.00g pool00
>   var_local_images                     onn_node1-g8-h4 Vwi-aotz--   1.10t pool00                                            89.72
>   var_log                              onn_node1-g8-h4 Vwi-aotz--   8.00g pool00                                             6.84
>   var_log_audit                        onn_node1-g8-h4 Vwi-aotz--   2.00g pool00                                             6.16
> [root@node6-g8-h4 ~]# vgs
>   VG              #PV #LV #SN Attr   VSize  VFree
>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>
>
> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version:
imgbased-1.0.20
> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:

Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
command='update', debug=True, experimental=False,
format='liveimg', stream='Image')
> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img'
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {'close_fds': True,
'stderr': -2}
> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned:
/tmp/mnt.1OhaU
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary:
(['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {}
> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',

'/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
> 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
'/tmp/mnt.1OhaU/LiveOS/rootfs.img'
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary:
(['mktemp', '-d', '--tmpdir', 'mnt.X'],) {}
> 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling:
(['mktemp', '-d', '--tmpdir', 'mnt.X']

[ovirt-users] Re: API token in postgres

2018-07-03 Thread Hari Prasanth Loganathan
Hi Guys,

Could somebody help with the Postgres table? Which table is used to store the
mapping between sessionId and SSO token?

If my query is not clear, please let me know.

Thanks,
Hari

On Tue, Jul 3, 2018 at 6:14 PM, Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:

> Hi Team,
>
> Which postgres table is used to store the relation between sessionId and
> SSO token ?
>
> I verified the *github* : https://github.com/oVirt/ovirt-engine/blob/
> d910a6e14bdb9fad0f21b8d9f22723f53db2fd2d/backend/manager/
> modules/aaa/src/main/java/org/ovirt/engine/core/aaa/filters/
> SsoRestApiAuthFilter.java
>
> *Code** :*
>
> QueryReturnValue queryRetVal = FiltersHelper.getBackend(ctx).
> runPublicQuery(
> QueryType.*GetEngineSessionIdForSsoToken*,
> new GetEngineSessionIdForSsoTokenQueryParameters(token));
>
>
>
> Which table in postgres has the mapping between sessionId and sso token?
> Could somebody help me on this?
>
> Thanks,
> Hari
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WGQSOOQ5TYBZPDHZOJZ6HDYPG5H4GU45/


[ovirt-users] Re: HE + Gluster : Engine corrupted?

2018-07-03 Thread Hanson Turner

Hi Ravishankar,

This doesn't look like split-brain...

[root@ovirtnode1 ~]# gluster volume heal engine info
Brick ovirtnode1:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick ovirtnode3:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick ovirtnode4:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

[root@ovirtnode1 ~]# gluster volume heal engine info split-brain
Brick ovirtnode1:/gluster_bricks/engine/engine
Status: Connected
Number of entries in split-brain: 0

Brick ovirtnode3:/gluster_bricks/engine/engine
Status: Connected
Number of entries in split-brain: 0

Brick ovirtnode4:/gluster_bricks/engine/engine
Status: Connected
Number of entries in split-brain: 0

[root@ovirtnode1 ~]# gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: c8dc1b04-bc25-4e97-81bb-4d94929918b1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirtnode1:/gluster_bricks/engine/engine
Brick2: ovirtnode3:/gluster_bricks/engine/engine
Brick3: ovirtnode4:/gluster_bricks/engine/engine

Thanks,

Hanson


On 07/02/2018 07:09 AM, Ravishankar N wrote:




On 07/02/2018 02:15 PM, Krutika Dhananjay wrote:

Hi,

So it seems some of the files in the volume have mismatching gfids. I 
see the following logs from 15th June, ~8pm EDT:



...
...
[2018-06-16 04:00:10.264690] E [MSGID: 108008] 
[afr-self-heal-common.c:335:afr_gfid_split_brain_source] 
0-engine-replicate-0: Gfid mismatch detected for 
/hosted-engine.lockspace>, 
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and 
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.


You can use 
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/ 
(see 3. Resolution of split-brain using gluster CLI).
Nit: The doc says in the beginning that gfid split-brain cannot be 
fixed automatically but newer releases do support it, so the methods 
in section 3 should work to solve gfid split-brains.
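
In outline, the CLI method from that section looks like this (a sketch using
the volume, brick and file names from the logs in this thread; all three
bricks must be up, and choosing which copy to keep is up to you):

  gluster volume heal engine info split-brain
  # keep the copy with the newest mtime:
  gluster volume heal engine split-brain latest-mtime \
      /c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace
  # or explicitly pick which brick's copy wins:
  gluster volume heal engine split-brain source-brick \
      ovirtnode1:/gluster_bricks/engine/engine \
      /c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace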


[2018-06-16 04:00:10.265861] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4411: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:11.522600] E [MSGID: 108008] 
[afr-self-heal-common.c:212:afr_gfid_split_brain_source] 
0-engine-replicate-0: All the bricks should be up to resolve the gfid 
split barin

This is a concern. For the commands to work, all 3 bricks must be online.
Thanks,
Ravi
[2018-06-16 04:00:11.522632] E [MSGID: 108008] 
[afr-self-heal-common.c:335:afr_gfid_split_brain_source] 
0-engine-replicate-0: Gfid mismatch detected for 
/hosted-engine.lockspace>, 
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and 
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:11.523750] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4493: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:12.864393] E [MSGID: 108008] 
[afr-self-heal-common.c:212:afr_gfid_split_brain_source] 
0-engine-replicate-0: All the bricks should be up to resolve the gfid 
split barin
[2018-06-16 04:00:12.864426] E [MSGID: 108008] 
[afr-self-heal-common.c:335:afr_gfid_split_brain_source] 
0-engine-replicate-0: Gfid mismatch detected for 
/hosted-engine.lockspace>, 
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and 
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:12.865392] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4575: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:18.716007] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4657: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:20.553365] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4739: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:21.771698] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4821: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:23.871647] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4906: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)
[2018-06-16 04:00:25.034780] W [fuse-bridge.c:540:fuse_entry_cbk] 
0-glusterfs-fuse: 4987: LOOKUP() 
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace 
=> -1 (Input/output error)

...
...


Adding Ravi who works on replicate component to hep resolve the 
mismatches.


-Krutika


On Mon, Jul 2, 2018 at 12:27 PM, Krutika Dhananjay 
<kdhan...@redhat.com> wrote:


Hi,

Sorry, I was out sick on Friday. I am looking into the logs. Will
get back to you in some time.

-Krutika

On 

[ovirt-users] Re: (v4.2.5-1.el7) Snapshots UI - html null

2018-07-03 Thread Maton, Brett
Actually the extra NIC is assigned to network 'Empty' in the edit VM form,
and is throwing the html null error in the snapshots form/view.

On 3 July 2018 at 14:26, Maton, Brett  wrote:

> I think the issue is being caused by a missing network.
>
> One of the upgrades of my test oVirt cluster went sideways and I ended up
> reinstalling from scratch and importing the storage domains from the previous
> cluster.
> I haven't created all of the networks that were in the previous oVirt
> install as they're not really needed at the moment.
>
> The VMs that are throwing the html null error when trying to view
> snapshots have a secondary NIC that isn't assigned to any network.
>
> Regards,
> Brett
>
>
> On 2 July 2018 at 08:04, Maton, Brett  wrote:
>
>> Hi,
>>
>>   I'm trying to restore a VM snapshot through the UI but keep running
>> into this error:
>>
>> Uncaught exception occurred. Please try reloading the page. Details:
>> Exception caught: html is null
>> Please have your administrator check the UI logs
>>
>> ui log attached.
>>
>> CentOS 7
>> oVirt 4.2.5-1.el7
>>
>> Regards,
>> Brett
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZIKNNUVCYDU5INHD5AUC2FGKF2FLTT5G/


[ovirt-users] Re: (v4.2.5-1.el7) Snapshots UI - html null

2018-07-03 Thread Maton, Brett
I think the issue is being caused by a missing network.

One of the upgrades of my test oVirt cluster went sideways and I ended up
reinstalling from scratch and importing the storage domains from the previous
cluster.
I haven't created all of the networks that were in the previous oVirt
install as they're not really needed at the moment.

The VMs that are throwing the html null error when trying to view
snapshots have a secondary NIC that isn't assigned to any network.

Regards,
Brett


On 2 July 2018 at 08:04, Maton, Brett  wrote:

> Hi,
>
>   I'm trying to restore a VM snapshot through the UI but keep running into
> this error:
>
> Uncaught exception occurred. Please try reloading the page. Details:
> Exception caught: html is null
> Please have your administrator check the UI logs
>
> ui log attached.
>
> CentOS 7
> oVirt 4.2.5-1.el7
>
> Regards,
> Brett
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5MOOU3AO2QL2TLARYJGQANIPL3EK2VXM/


[ovirt-users] API token in postgres

2018-07-03 Thread Hari Prasanth Loganathan
Hi Team,

Which postgres table is used to store the relation between sessionId and
SSO token ?

I verified the *github* :
https://github.com/oVirt/ovirt-engine/blob/d910a6e14bdb9fad0f21b8d9f22723f53db2fd2d/backend/manager/modules/aaa/src/main/java/org/ovirt/engine/core/aaa/filters/SsoRestApiAuthFilter.java

*Code** :*

QueryReturnValue queryRetVal = FiltersHelper.getBackend(ctx).runPublicQuery(
QueryType.*GetEngineSessionIdForSsoToken*,
new GetEngineSessionIdForSsoTokenQueryParameters(token));



Which table in postgres has the mapping between sessionId and sso token?
Could somebody help me on this?

Thanks,
Hari
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/527X5NQMSM252VABZBAWJYKMAZ7Z3FLM/


[ovirt-users] Cannot import a qcow2 image

2018-07-03 Thread etienne . charlier
Hello,

I'm trying without success to import a qcow2 file into oVirt. I tried on an
iSCSI data domain and an NFS data domain.

I struggled quite a lot to get the "test connection" to succeed (I wrote a small
shell script to "deploy" Let's Encrypt certificates into the oVirt engine).

The docs are not clear on the fact that the certificates for imageio-proxy are
different from those for the main engine…


Now, the upload fails with

Transfer was stopped by system. Reason: failed to add image ticket to 
ovirt-imageio-proxy.
Image gets stuck in "transfer paused by system"

Any idea ?

oVirt is up to date: 4.2.4 on both engine and hosts.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FTC3PBZCRRTI2LBADOPOS2EYRCZ6EQA3/


[ovirt-users] Engine Setup Error

2018-07-03 Thread Sakhi Hadebe
Hi,

We are deploying the hosted engine on oVirt-Node-4.2.3.1 using the command
"hosted-engine --deploy".

After providing answers, it runs the Ansible script and hits an error when
creating the GlusterFS storage domain. Attached is a screenshot of the error.

Please help.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XN5ML4VTDL6BDAAFFBGFXI5KEEZDMGNK/


[ovirt-users] Re: Antwort: Re: vacuumdb: could not connect to database ovirt_engine_history

2018-07-03 Thread Shirly Radco
Hi Didi,

Can you please have a look?

Thanks,

--

SHIRLY RADCO

BI SeNIOR SOFTWARE ENGINEER

Red Hat Israel 

TRIED. TESTED. TRUSTED. 

On Tue, Jul 3, 2018 at 2:20 PM,  wrote:

> Hi Shirly,
>
> here you are:
>
> --snip
> ovirt_engine_history=# SELECT * FROM history_configuration;
>  var_name  | var_value |  var_datetime
> ---+---+
>  default_language  | en_US |
>  firstSync | false | 2018-04-09 15:52:00+02
>  MinimalETLVersion | 4.2.0 |
>  lastHourAggr  |   | 2018-07-03 12:00:00+02
>  HourlyAggFailed   | false |
>  lastDayAggr   |   | 2018-07-02 00:00:00+02
> (6 rows)
>
> ovirt_engine_history=#
> --snap
>
> date
> Tue Jul  3 13:16:27 CEST 2018
>
> Thank you,
> Ema
>
>
>
>
> From: "Shirly Radco" 
> To: emanuel.santosvar...@mahle.com,
> Cc: "Roy Golan" , "users" 
> Date: 03.07.2018 12:37
> Subject: Re: [ovirt-users] Antwort: Re: vacuumdb: could not
> connect to database ovirt_engine_history
> --
>
>
>
> Hi,
>
> Please share your history_configuration table content
> in ovirt_engine_history db:
>
> SELECT * FROM history_configuration;
>
> the date and time you run this query
>
> and the ovirt_engine_dwh.log
>
>
>
>
>
> --
> *SHIRLY RADCO*
> BI SeNIOR SOFTWARE ENGINEER
> Red Hat Israel 
>
>  *TRIED. TESTED. TRUSTED.*
> 
>
>
> On Fri, Jun 29, 2018 at 9:06 AM, <*emanuel.santosvar...@mahle.com*
> > wrote:
> Hi Roy, well db is alive :
>
>
> su - postgres -c 'scl enable rh-postgresql95 -- psql ovirt_engine_history'
> psql (9.5.9)
> Type "help" for help.
>
> ovirt_engine_history=# \dt
>  List of relations
>  Schema |   Name| Type  |Owner
> +---+---+---
> ---
>  public | calendar  | table | ovirt_engine_history
>  public | cluster_configuration | table | ovirt_engine_history
>  public | datacenter_configuration  | table | ovirt_engine_history
>  public | datacenter_storage_domain_map | table | ovirt_engine_history
>  public | enum_translator   | table | ovirt_engine_history
>  public | history_configuration | table | ovirt_engine_history
>  public | host_configuration| table | ovirt_engine_history
>  public | host_daily_history| table | ovirt_engine_history
>  public | host_hourly_history   | table | ovirt_engine_history
>  public | host_interface_configuration  | table | ovirt_engine_history
>  public | host_interface_daily_history  | table | ovirt_engine_history
>  public | host_interface_hourly_history | table | ovirt_engine_history
>  public | host_interface_samples_history| table | ovirt_engine_history
>  public | host_samples_history  | table | ovirt_engine_history
>  public | schema_version| table | ovirt_engine_history
>  public | statistics_vms_users_usage_daily  | table | ovirt_engine_history
>  public | statistics_vms_users_usage_hourly | table | ovirt_engine_history
>  public | storage_domain_configuration  | table | ovirt_engine_history
>  public | storage_domain_daily_history  | table | ovirt_engine_history
>  public | storage_domain_hourly_history | table | ovirt_engine_history
>  public | storage_domain_samples_history| table | ovirt_engine_history
>  public | tag_details   | table | ovirt_engine_history
>  public | tag_relations_history | table | ovirt_engine_history
>  public | users_details_history | table | ovirt_engine_history
>  public | vm_configuration  | table | ovirt_engine_history
>  public | vm_daily_history  | table | ovirt_engine_history
>  public | vm_device_history | table | ovirt_engine_history
>  public | vm_disk_configuration | table | ovirt_engine_history
>  public | vm_disk_daily_history | table | ovirt_engine_history
>  public | vm_disk_hourly_history| table | ovirt_engine_history
>  public | vm_disk_samples_history   | table | ovirt_engine_history
>  public | vm_disks_usage_daily_history  | table | ovirt_engine_history
>  public | vm_disks_usage_hourly_history | table | ovirt_engine_history
>  public | vm_disks_usage_samples_history| table | ovirt_engine_history
>  public | vm_hourly_history | table | ovirt_engine_history
>  public | vm_interface_configuration| table | ovirt_engine_history
>  public | vm_interface_daily_history| table | ovirt_engine_history
>  public | vm_interface_hourly_history   | table | ovirt_engine_history
>  public | vm_interface_samples_history  | tab

[ovirt-users] Antwort: Re: Antwort: Re: vacuumdb: could not connect to database ovirt_engine_history

2018-07-03 Thread emanuel . santosvarina
Hi Shirly,

here you are:

--snip
ovirt_engine_history=# SELECT * FROM history_configuration;
 var_name  | var_value |  var_datetime 
---+---+
 default_language  | en_US | 
 firstSync | false | 2018-04-09 15:52:00+02
 MinimalETLVersion | 4.2.0 | 
 lastHourAggr  |   | 2018-07-03 12:00:00+02
 HourlyAggFailed   | false | 
 lastDayAggr   |   | 2018-07-02 00:00:00+02
(6 rows)

ovirt_engine_history=# 
--snap

date
Tue Jul  3 13:16:27 CEST 2018

Thank you,
Ema
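
While waiting on the dwh log, the failing step can also be reproduced by hand,
which narrows down whether the database itself is unreachable or only the
credentials engine-setup uses are wrong (a sketch reusing the scl wrapper from
the psql command earlier in this thread):

  su - postgres -c 'scl enable rh-postgresql95 -- vacuumdb --analyze ovirt_engine_history'

If that works over the local socket, the problem is more likely in the
connection settings engine-setup reads from the dwh configuration under
/etc/ovirt-engine-dwh/.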




From:    "Shirly Radco" 
To:      emanuel.santosvar...@mahle.com, 
Cc:      "Roy Golan" , "users" 
Date:    03.07.2018 12:37
Subject: Re: [ovirt-users] Antwort: Re: vacuumdb: could not connect 
to database ovirt_engine_history



Hi,

Please share your history_configuration table content 
in ovirt_engine_history db:

SELECT * FROM history_configuration;

the date and time you run this query

and the ovirt_engine_dwh.log





--
SHIRLY RADCO
BI SeNIOR SOFTWARE ENGINEER
Red Hat Israel


TRIED. TESTED. TRUSTED.

On Fri, Jun 29, 2018 at 9:06 AM,  wrote:
Hi Roy, well db is alive : 


su - postgres -c 'scl enable rh-postgresql95 -- psql ovirt_engine_history' 

psql (9.5.9) 
Type "help" for help. 

ovirt_engine_history=# \dt
 List of relations
 Schema |   Name| Type  |Owner
+---+---+--
 public | calendar  | table | ovirt_engine_history
 public | cluster_configuration | table | ovirt_engine_history
 public | datacenter_configuration  | table | ovirt_engine_history
 public | datacenter_storage_domain_map | table | ovirt_engine_history
 public | enum_translator   | table | ovirt_engine_history
 public | history_configuration | table | ovirt_engine_history
 public | host_configuration| table | ovirt_engine_history
 public | host_daily_history| table | ovirt_engine_history
 public | host_hourly_history   | table | ovirt_engine_history
 public | host_interface_configuration  | table | ovirt_engine_history
 public | host_interface_daily_history  | table | ovirt_engine_history
 public | host_interface_hourly_history | table | ovirt_engine_history
 public | host_interface_samples_history| table | ovirt_engine_history
 public | host_samples_history  | table | ovirt_engine_history
 public | schema_version| table | ovirt_engine_history
 public | statistics_vms_users_usage_daily  | table | ovirt_engine_history
 public | statistics_vms_users_usage_hourly | table | ovirt_engine_history
 public | storage_domain_configuration  | table | ovirt_engine_history
 public | storage_domain_daily_history  | table | ovirt_engine_history
 public | storage_domain_hourly_history | table | ovirt_engine_history
 public | storage_domain_samples_history| table | ovirt_engine_history
 public | tag_details   | table | ovirt_engine_history
 public | tag_relations_history | table | ovirt_engine_history
 public | users_details_history | table | ovirt_engine_history
 public | vm_configuration  | table | ovirt_engine_history
 public | vm_daily_history  | table | ovirt_engine_history
 public | vm_device_history | table | ovirt_engine_history
 public | vm_disk_configuration | table | ovirt_engine_history
 public | vm_disk_daily_history | table | ovirt_engine_history
 public | vm_disk_hourly_history| table | ovirt_engine_history
 public | vm_disk_samples_history   | table | ovirt_engine_history
 public | vm_disks_usage_daily_history  | table | ovirt_engine_history
 public | vm_disks_usage_hourly_history | table | ovirt_engine_history
 public | vm_disks_usage_samples_history| table | ovirt_engine_history
 public | vm_hourly_history | table | ovirt_engine_history
 public | vm_interface_configuration| table | ovirt_engine_history
 public | vm_interface_daily_history| table | ovirt_engine_history
 public | vm_interface_hourly_history   | table | ovirt_engine_history
 public | vm_interface_samples_history  | table | ovirt_engine_history
 public | vm_samples_history| table | ovirt_engine_history
(40 rows)

ovirt_engine_history=#






Von:"Roy Golan"  
An:emanuel.santosvar...@mahle.com, 
Kopie:users@ovirt.org 
Datum:28.06.2018 17:37 
Betreff:Re: [ovirt-users] vacuumdb: could not connect to database 
ovirt_engine_history 






On Thu, 28 Jun 2018 at 18:06  wrote: 
..trying to update from 4.2.3 to 4.2.4 engine-setup fails with the 
following error: 

--snip 
20

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Oliver, can you share the output from lvs ?
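
If it is easy to grab, a slightly fuller report would also help, for example
(standard LVM report fields; the exact column list is only a suggestion):

  lvs -a -o lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent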

On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Hi Yuval,
>
> * reinstallation failed, because LV already exists.
>   ovirt-node-ng-4.2.4-0.20180626.0 onn_ovn-monster Vri-a-tz-k
> <252,38g pool00  0,85
>   ovirt-node-ng-4.2.4-0.20180626.0+1   onn_ovn-monster Vwi-a-tz--
> <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
> See attachment imgbased.reinstall.log
>
> * I removed them and re-reinstall without luck.
>
> I got KeyError: 
>
> See attachment imgbased.rereinstall.log
>
> Also a new problem with nodectl info
> [root@ovn-monster tmp]# nodectl info
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
> in <module>
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
> in CliApplication
> return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
> in command
> return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
> in info
> Info(self.imgbased, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
> __init__
> self._fetch_information()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
> _fetch_information
> self._get_layout()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
> _get_layout
> layout = LayoutParser(self.app.imgbase.layout()).parse()
>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155,
> in layout
> return self.naming.layout()
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
> in layout
> tree = self.tree(lvs)
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
> in tree
> bases[img.base.nvr].layers.append(img)
> KeyError: 
>
>
>
>
>
>
> On 02.07.2018 at 22:22, Oliver Riesener <
> oliver.riese...@hs-bremen.de>:
>
> Hi Yuval,
>
> yes you are right, there was a unused and deactivated var_crash LV.
>
> * I activated and mount it to /var/crash via /etc/fstab.
> * /var/crash was empty, and LV has already ext4 fs.
>   var_crashonn_ovn-monster Vwi-aotz--   10,00g
> pool002,86
>
>
> * Now i will try to upgrade again.
>   * yum reinstall ovirt-node-ng-image-update.noarch
>
> BTW, no more imgbased.log files found.
>
> On 02.07.2018 at 20:57, Yuval Turgeman wrote:
>
> From your log:
>
> AssertionError: Path is already a volume: /var/crash
>
> Basically, it means that you already have an LV for /var/crash but it's
> not mounted for some reason, so either mount it (if the data good) or
> remove it and then reinstall the image-update rpm.  Before that, check that
> you dont have any other LVs in that same state - or you can post the output
> for lvs... btw, do you have any more imgbased.log files laying around ?
>
> You can find more details about this here:
>
> https://access.redhat.com/documentation/en-us/red_hat_
> virtualization/4.1/html/upgrade_guide/recovering_from_
> failed_nist-800_upgrade
>
> On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener wrote:
>
>> Hi,
>>
>> i attached my /tmp/imgbased.log
>>
>> Cheers
>>
>> Oliver
>>
>>
>>
>> On 02.07.2018 at 13:58, Yuval Turgeman wrote:
>>
>> Looks like the upgrade script failed - can you please attach
>> /var/log/imgbased.log or /tmp/imgbased.log ?
>>
>> Thanks,
>> Yuval.
>>
>> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
>> wrote:
>>
>>> Yuval, can you please have a look?
>>>
>>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener:
>>>
 Yes, here is the same.

 It seams the bootloader isn’t configured right ?

 I did the Upgrade and reboot to 4.2.4 from UI and got:

 [root@ovn-monster ~]# nodectl info
 layers:
   ovirt-node-ng-4.2.4-0.20180626.0:
 ovirt-node-ng-4.2.4-0.20180626.0+1
   ovirt-node-ng-4.2.3.1-0.20180530.0:
 ovirt-node-ng-4.2.3.1-0.20180530.0+1
   ovirt-node-ng-4.2.3-0.20180524.0:
 ovirt-node-ng-4.2.3-0.20180524.0+1
   ovirt-node-ng-4.2.1.1-0.20180223.0:
 ovirt-node-ng-4.2.1.1-0.20180223.0+1
 bootloader:
   default: ovirt-node-ng-4.2.3-0.20180524.0+1
   entries:
 ovirt-node-ng-4.2.3-0.20180524.0+1:
   index: 0
   title: ovirt-node-ng-4.2.3-0.20180524.0
   kernel: /boot/ovirt-node-ng-4.2.3-0.20
 180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
   args: "ro crashkernel=auto rd.lvm.lv=
 onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 
 rd.lvm.lv=onn_ovn-monster/swap
 rd.md.uuid=c6c3013b:027a9346:

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Not sure this is the problem. Autoextend should be enabled for the thin
pool; `lvs -o +profile` should show the imgbased-pool profile (defined in
/etc/lvm/profile/imgbased-pool.profile).
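
For reference, such a profile is just an LVM activation section with the
thin-pool autoextend settings; roughly like this (illustrative values only,
the thresholds imgbased actually ships may differ):

# cat /etc/lvm/profile/imgbased-pool.profile
activation {
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 20
}

With the profile attached to the pool (e.g. lvchange --metadataprofile
imgbased-pool onn_node1-g8-h4/pool00), LVM can then grow the pool and its
metadata automatically once usage crosses the threshold.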

On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David  wrote:

> On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen  wrote:
> >
> > This error adds some clarity.
> >
> > That said, I'm a bit unsure how the space can be the issue given I have
> several hundred GB of storage in the thin pool that's unused...
> >
> > How do you suggest I proceed?
> >
> > Thank you for your help,
> >
> > Matt
> >
> >
> >
> > [root@node6-g8-h4 ~]# lvs
> >
> >   LV   VG  Attr   LSize
>  Pool   Origin Data%  Meta%  Move Log Cpy%Sync
> Convert
> >   home onn_node1-g8-h4 Vwi-aotz--
>  1.00g pool004.79
> >   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
> <50.06g pool00 root
> >   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
> <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
> >   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
> <50.06g pool00
> >   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
> <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0 6.95
> >   pool00   onn_node1-g8-h4 twi-aotz--
> <1.30t   76.63  50.34
>
> I think your thinpool meta volume is close to full and needs to be
> enlarged.
> This quite likely happened because you extended the thinpool without
> extending the meta vol.
>
> Check also 'lvs -a'.
>
> This might be enough, but check the names first:
>
> lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
>
> Best regards,
>
> >   root onn_node1-g8-h4 Vwi---tz--
> <50.06g pool00
> >   tmp  onn_node1-g8-h4 Vwi-aotz--
>  1.00g pool005.04
> >   var  onn_node1-g8-h4 Vwi-aotz--
> 15.00g pool005.86
> >   var_crashonn_node1-g8-h4 Vwi---tz--
> 10.00g pool00
> >   var_local_images onn_node1-g8-h4 Vwi-aotz--
>  1.10t pool0089.72
> >   var_log  onn_node1-g8-h4 Vwi-aotz--
>  8.00g pool006.84
> >   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>  2.00g pool006.16
> > [root@node6-g8-h4 ~]# vgs
> >   VG  #PV #LV #SN Attr   VSize  VFree
> >   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
> >
> >
> > 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
> > 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> command='update', debug=True, experimental=False, format='liveimg',
> stream='Image')
> > 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
> 20180626.0.el7.squashfs.img'
> > 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
> '-d', '--tmpdir', 'mnt.X'],) {}
> > 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
> > 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> u'/tmp/mnt.1OhaU'],) {}
> > 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,157 [DEBUG] (MainThread) Returned:
> > 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Mounted squashfs
> > 2018-06-29 14:19:31,158 [DEBUG] (MainThread) Found fsimage at
> '/tmp/mnt.1OhaU/LiveOS/rootfs.img'
> > 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling binary: (['mktemp',
> '-d', '--tmpdir', 'mnt.X'],) {}
> > 2018-06-29 14:19:31,159 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Returned: /tmp/mnt.153do
> > 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling binary: (['mount',
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {}
> > 2018-06-29 14:19:31,162 [DEBUG] (MainThread) Calling: (['mount',
> u'/tmp/mnt.1OhaU/LiveOS/rootfs.img', u'/tmp/mnt.153do'],) {'close_fds':
> True, 'stderr': -2}
> > 2018-06-29 14:19:31,177 [DEBUG] (MainThread) Returned:
> > 2018-06-29 14:19:31,189 [DEBUG] (MainThread) Using nvr:
> ovirt-node-ng-4.2.4-0.20180626.0
> > 2018-06-29 14:19:31,189 [DEBUG] (MainT

[ovirt-users] Re: ovirt-guest-agent-common version and repo

2018-07-03 Thread Michal Skrivanek


> On 3 Jul 2018, at 12:18, Gianluca Cecchi  wrote:
> 
> On Thu, Jun 28, 2018 at 11:19 AM, Gianluca Cecchi wrote:
> Hello,
> on CentOS 7.x VM with epel enabled it seems that latest package is 
> ovirt-guest-agent-common-1.0.14-1.el7.noarch and from its changelog it seems 
> far from being up to date and was released before the 4.2.0 release...
> 
> * Thu Nov 02 2017 Tomáš Golembiovský - 1.0.14-1
> - Bump to version 1.0.14
> - Changed link to upstream sources
> 
> Am I doing something wrong with it and its repo or is it not so important 
> component even after upgrading to oVirt 4.2.x?
> 
> Thanks,
> Gianluca
> 
> 
> 
> any comment on this?

AFAICT there were no changes other than packaging since Nov 2017, nothing 
functional.
The "product cycle" of guest agent is only loosely related to the rest of oVirt.
Main focus was on improving qemu-guest-agent capabilities upstream, actually.

Thanks,
michal

> 
> Gianluca
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LMDFRWS6LBDBOMWQZS7QZC3XYGOUBCIE/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F3CWGRVFY35IYEW5GXQUHP7RNOZ3NO5Y/


[ovirt-users] Re: Antwort: Re: vacuumdb: could not connect to database ovirt_engine_history

2018-07-03 Thread Shirly Radco
Hi,

Please share your history_configuration table content in ovirt_engine_history
db:

SELECT * FROM history_configuration;

the date and time you run this query

and the ovirt_engine_dwh.log
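
If it is more convenient, the query can be run in one go from the shell
(assuming the rh-postgresql95 SCL layout shown earlier in this thread):

su - postgres -c "scl enable rh-postgresql95 -- psql ovirt_engine_history -c 'SELECT * FROM history_configuration;'"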





--

SHIRLY RADCO

BI SENIOR SOFTWARE ENGINEER

Red Hat Israel 

TRIED. TESTED. TRUSTED. 

On Fri, Jun 29, 2018 at 9:06 AM,  wrote:

> Hi Roy, well db is alive :
>
>
> su - postgres -c 'scl enable rh-postgresql95 -- psql ovirt_engine_history'
> psql (9.5.9)
> Type "help" for help.
>
> ovirt_engine_history=# \dt
>  List of relations
>  Schema |   Name| Type  |Owner
> +---+---+------
>  public | calendar  | table | ovirt_engine_history
>  public | cluster_configuration | table | ovirt_engine_history
>  public | datacenter_configuration  | table | ovirt_engine_history
>  public | datacenter_storage_domain_map | table | ovirt_engine_history
>  public | enum_translator   | table | ovirt_engine_history
>  public | history_configuration | table | ovirt_engine_history
>  public | host_configuration| table | ovirt_engine_history
>  public | host_daily_history| table | ovirt_engine_history
>  public | host_hourly_history   | table | ovirt_engine_history
>  public | host_interface_configuration  | table | ovirt_engine_history
>  public | host_interface_daily_history  | table | ovirt_engine_history
>  public | host_interface_hourly_history | table | ovirt_engine_history
>  public | host_interface_samples_history| table | ovirt_engine_history
>  public | host_samples_history  | table | ovirt_engine_history
>  public | schema_version| table | ovirt_engine_history
>  public | statistics_vms_users_usage_daily  | table | ovirt_engine_history
>  public | statistics_vms_users_usage_hourly | table | ovirt_engine_history
>  public | storage_domain_configuration  | table | ovirt_engine_history
>  public | storage_domain_daily_history  | table | ovirt_engine_history
>  public | storage_domain_hourly_history | table | ovirt_engine_history
>  public | storage_domain_samples_history| table | ovirt_engine_history
>  public | tag_details   | table | ovirt_engine_history
>  public | tag_relations_history | table | ovirt_engine_history
>  public | users_details_history | table | ovirt_engine_history
>  public | vm_configuration  | table | ovirt_engine_history
>  public | vm_daily_history  | table | ovirt_engine_history
>  public | vm_device_history | table | ovirt_engine_history
>  public | vm_disk_configuration | table | ovirt_engine_history
>  public | vm_disk_daily_history | table | ovirt_engine_history
>  public | vm_disk_hourly_history| table | ovirt_engine_history
>  public | vm_disk_samples_history   | table | ovirt_engine_history
>  public | vm_disks_usage_daily_history  | table | ovirt_engine_history
>  public | vm_disks_usage_hourly_history | table | ovirt_engine_history
>  public | vm_disks_usage_samples_history| table | ovirt_engine_history
>  public | vm_hourly_history | table | ovirt_engine_history
>  public | vm_interface_configuration| table | ovirt_engine_history
>  public | vm_interface_daily_history| table | ovirt_engine_history
>  public | vm_interface_hourly_history   | table | ovirt_engine_history
>  public | vm_interface_samples_history  | table | ovirt_engine_history
>  public | vm_samples_history| table | ovirt_engine_history
> (40 rows)
>
> ovirt_engine_history=#
>
>
>
>
>
>
> Von:"Roy Golan" 
> An:emanuel.santosvar...@mahle.com,
> Kopie:users@ovirt.org
> Datum:28.06.2018 17:37
> Betreff:Re: [ovirt-users] vacuumdb: could not connect to database
> ovirt_engine_history
> --
>
>
>
>
>
> On Thu, 28 Jun 2018 at 18:06 <*emanuel.santosvar...@mahle.com*
> > wrote:
> ..trying to update from 4.2.3 to 4.2.4 engine-setup fails with the
> following error:
>
> --snip
> 2018-06-28 16:26:45,507+0200 DEBUG otopi.plugins.ovirt_engine_
> setup.ovirt_engine_dwh.db.vacuum plugin.execute:926 execute-output:
> ['/usr/share/ovirt-engine-dwh/bin/dwh-vacuum.sh', '-f', '-v'] stderr:
> vacuumdb: could not connect to database ovirt_engine_history: FATAL:
>  password authentication failed for user "ovirt_engine_history"
>
> do you have dwh installed? if no just skip this vacuum part.
> you can check the db connection from cli using
> $ su - postgres -c 'scl enable rh-postgresql95 -- psql
> ovirt_engine_history'
>
> 2018-06-28 16:26:45,507+0200 DEBUG otopi.context
> context._executeMethod:143 method exce

[ovirt-users] Re: ovirt-guest-agent-common version and repo

2018-07-03 Thread Gianluca Cecchi
On Thu, Jun 28, 2018 at 11:19 AM, Gianluca Cecchi  wrote:

> Hello,
> on CentOS 7.x VM with epel enabled it seems that latest package
> is ovirt-guest-agent-common-1.0.14-1.el7.noarch and from its changelog it
> seems far from being up to date and was released before the 4.2.0 release...
>
> * Thu Nov 02 2017 Tomáš Golembiovský  - 1.0.14-1
> - Bump to version 1.0.14
> - Changed link to upstream sources
>
> Am I doing something wrong with it and its repo or is it not so important
> component even after upgrading to oVirt 4.2.x?
>
> Thanks,
> Gianluca
>
>
>
any comment on this?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LMDFRWS6LBDBOMWQZS7QZC3XYGOUBCIE/


[ovirt-users] Re: Ovirt and L2 Gateway

2018-07-03 Thread Marcin Mirecki
Hi Carl,

Glad to hear it helped, and thanks for the description.
May I ask why you want to channel the traffic through
one host?
This solution has the disadvantage of pushing all outgoing
traffic from the OVN network through a single host, which
is not quite optimal for performance. It would be interesting
for us to know the use case for this.

Thanks,
Marcin


On Sun, Jul 1, 2018 at 6:27 PM,  wrote:

> Hi Marcin.
>
> Thank you for the hint. I have now got the l2gateway functionality working
> as I hoped for.
>
> To sum up the exact steps taken (I am running the new oVirt v. 4.2.4):
>
> 1. In oVirt's web-management interface add the needed "physical network"
> network (by which I mean a network created without clicking the "Create on
> External Provider" check box). When creating the "physical network" click
> "Enable VLAN tagging" and specify the right VLAN ID if this is relevant. In
> the following the name of this newly created "physical network" is referred
> to by the variable $physnet and the VLAN ID is referred to by the variable
> $tag.
>
> 2. Notice that an extra OVN network named "external_$physnet" is
> automatically created by oVirt v. 4.2.4. This _might_ be important and I
> think that you _might_ have to create a similar network yourself if using
> older oVirt versions. Then you would have to create a similar OVN network
> manually and remember to click the "Create on External Provider" check box,
> click the "Connect to Data Center Network" and select the "physical
> network" ($physnet) you created in step 1.
>
> 3. Add the newly created "physical network" ($physnet) to the physical
> interface on the physical host which you want to become your future L2
> Gateway. Do this by clicking the host, selecting "Network Interfaces" and
> clicking the "Setup Host Networks" button. In the window opened drag-drop
> the "physical network" ($physnet) icon onto the box containing the name of
> the relevant physical interface of the host.
>
> 4. In oVirt create a pure OVN overlay network (by clicking the "Create on
> External Provider" check box) which will be used for communication by all
> VM's needing access to the physical network - no matter which host they are
> running on and no matter if the host has a direct physical interface to the
> "physical network" ($physnet) or not. In the following the name of this
> newly created OVN overlay network will referred to by the variable $ovn.
>
> 5. Enter this command on the oVirt engine server to find the chassis UUID
> of the future L2 Gateway host:
> # ovn-sbctl show
>
> Which creates output similar to this:
>
> Chassis "16a1d7e4-70f6-4683-8ad6-77fe7fa6d03f"
> hostname: "kvm1.ovirt.local"
> Encap geneve
> ip: "10.100.0.11"
> options: {csum="true"}
> Chassis "2801ee0b-46c4-4c23-aafc-85804afdff54"
> hostname: "kvm2.ovirt.local"
> Encap geneve
> ip: "10.100.0.12"
> options: {csum="true"}
> Chassis "e732b833-200c-45bb-b55f-25c0f2ab504e"
> hostname: "kvm3.ovirt.local"
> Encap geneve
> ip: "10.100.0.13"
> options: {csum="true"}
>
> Notice the Chassis UUID for the oVirt host which you want to become your
> L2 Gateway: If you e.g. want kvm3.ovirt.local to become your future L2
> Gateway then the chassis UUID in the above example would be
> "e732b833-200c-45bb-b55f-25c0f2ab504e". In the following the correct
> chassis UUID will be referred to by the variable $chassisUUID.
>
> 6. Enter these commands on the oVirt engine server to create a L2 Gateway
> with a name contained in the variable $l2gw (the name is not important but
> you might want to select something meaningful like "l2gw_$physnet"):
> # ovn-nbctl lsp-add $ovn $l2gw "" $tag
> # ovn-nbctl lsp-set-addresses $l2gw unknown
> # ovn-nbctl lsp-set-type $l2gw l2gateway
> # ovn-nbctl lsp-set-options $l2gw network_name=$physnet
> l2gateway-chassis=$chassisUUID
>
> Here you need to be extra careful because the OVN developers have been a
> little sloppy while naming different option keys: The network name uses an
> UNDERSCORE so it is called "network_name" whereas the L2 Gateway chassis
> uses a HYPHEN so it is called "l2gateway-chassis". If you get this wrong
> you can spend quite some time debugging - trust me!!!
>
> That's it. oVirt takes care of the rest :-)
>
> Best regards,
>
> Carl
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/HAHNME4UAG4GI2G54RZSUXGO632Q6ALT/
>
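
For anyone following along, here is step 6 with concrete values filled in,
purely as an example - $ovn=ovn_net1, $l2gw=l2gw_physnet1, $physnet=physnet1,
$tag=100 (all made up), and the kvm3 chassis UUID taken from the ovn-sbctl
output in step 5:

# ovn-nbctl lsp-add ovn_net1 l2gw_physnet1 "" 100
# ovn-nbctl lsp-set-addresses l2gw_physnet1 unknown
# ovn-nbctl lsp-set-type l2gw_physnet1 l2gateway
# ovn-nbctl lsp-set-options l2gw_physnet1 network_name=physnet1 l2gateway-chassis=e732b833-200c-45bb-b55f-25c0f2ab504e

Note again the mixed spelling: network_name takes an underscore,
l2gateway-chassis takes a hyphen.
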
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/commun

[ovirt-users] Re: Dedicated underlay network for overlay traffic

2018-07-03 Thread Marcin Mirecki
The OVN tunnels are set up during host installation, when
only 'ovirtmgmt' is available (no other networks are created
yet).
You can change the tunneling network for a cluster by using
the procedure described below. This will hopefully be integrated
into the UI one day.

1. Go to:
cd /usr/share/ovirt-engine/playbooks

2. Execute:
ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa \
  -i /usr/share/ovirt-engine-metrics/bin/ovirt-engine-hosts-ansible-inventory \
  --extra-vars "cluster_name=<cluster_name> ovn_central=<ovn_central_ip> ovn_tunneling_interface=<vdsm_network_name>" \
  ovirt-provider-ovn-driver.yml

Note that this only changes the settings on existing hosts.
If new hosts are added to the cluster, the procedure has to be repeated.

The OVN tunnel network can also be changed on an individual host by
invoking:
vdsm-tool ovn-config <ovn_central_ip> <host_tunneling_ip>
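
As a concrete illustration (the cluster name, addresses and network name
below are placeholders for the example, not defaults):

# cd /usr/share/ovirt-engine/playbooks
# ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa \
    -i /usr/share/ovirt-engine-metrics/bin/ovirt-engine-hosts-ansible-inventory \
    --extra-vars "cluster_name=Default ovn_central=192.0.2.10 ovn_tunneling_interface=ovn_underlay" \
    ovirt-provider-ovn-driver.yml

or, for a single host, pointing the tunnel endpoint at that host's address
on the dedicated network:

# vdsm-tool ovn-config 192.0.2.10 192.0.2.21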

Marcin


On Sun, Jul 1, 2018 at 7:03 PM,  wrote:

> I am going to be using OVN Geneve overlay networks extensively and I
> expect a lot of traffic on the underlay network being used for transmission
> of the tunnel traffic.
>
> In oVirt the default seems to be that the network "ovirtmgmt" is being
> used for the underlay network - which could cause problems for management
> traffic if vms are saturating the link with traffic on different OVN
> overlay networks.
>
> When selecting a specific cluster, selecting "Logical Networks" and
> pressing the "Manage Networks" button it is possible to specify that a
> specific Data Center Network shall be limited to one or more of the
> following traffic types:
> - VM Network
> - Management
> - Display Network
> - Migration Network
> - Gluster Network
> - Default Route
>
> Here I miss an option called "Underlay Network for OVN Geneve traffic" or
> similar.
>
> Is there a way - e.g. by editing some configuration files on the oVirt
> engine and on the oVirt nodes - to divert all overlay Geneve traffic away
> from the management interface unto a dedicated network interface?
>
> Carl
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/7MX6GKDKNQ7GCIWPEEMH374YIJ3JLDHF/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GHLEPKJ76DABPOZCINN6YSEKAETHHH4G/