[ovirt-users] Disk move succeed but didn't move content

2019-11-28 Thread Juan Pablo Lorier

Hi,

I have a fresh install of oVirt 4.3 and tried to import a gluster 
vmstore. I managed to import the former data domain via NFS. The problem 
is that when I moved the disks of the VMs to the new iSCSI data domain, 
I got a warning that the sparse disk type would be converted to qcow2 disks, 
and after accepting, the disks were moved with no error.


The problem is that the disks now show as <1 GB in size instead of the 
original size, and thus the VMs fail to start.


Is there any way to recover those disks? I have no backup of the vms :-(

Regards
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YKK2HIGPFJUZBS5KQHIIWCP5OGC3ZYVY/


[ovirt-users] Re: AWX and error using ovirt as an inventory source

2019-11-28 Thread Nathanaël Blanchet
give a try to https://awx.wiki/ , based on RPMs and not Docker, if you want to test this 
feature, it rocks. 


De: "Gianluca Cecchi"  
À: "Nathanaël Blanchet"  
Cc: "users"  
Envoyé: Jeudi 28 Novembre 2019 19:17:12 
Objet: Re: [ovirt-users] AWX and error using ovirt as an inventory source 

On Thu, Nov 28, 2019 at 5:33 PM Nathanaël Blanchet <blanc...@abes.fr> wrote: 






On 28/11/2019 at 17:15, Gianluca Cecchi wrote: 


On Thu, Nov 28, 2019 at 4:59 PM Nathanaël Blanchet <blanc...@abes.fr> wrote: 




Hello gianluca, 

I reported this issue a long time ago (march of 19) in an unofficial rpm awx 
project 


https://github.com/MrMEEE/awx-build/issues/72 




But I see that it is marked as closed 

It's marked as closed for the unofficial RPM project only, not with the regular 
container deployment. 




All related RHV/ovirt stuff (not only dynamic inventory, but all ovirt* ansible 
module) fail because of the version of pycurl (worked before 7.19) 


You mean inside awx container, correct? 
See below my comments, as I know almost nothing about venv concepts... sorry 




What you need to do is create a py2.x venv then recompile latest pycurl with 
nss support like this: 

* # /opt/rh/rh-python36/root/usr/bin/awx-create-venv (-e 
/var/lib/awx/venv/) -n ovirt 


I have to run this inside awx container as user root? Can you explain the 
syntax? I get error because of the parenthesis... 



sure, inside the container; the parentheses show the default parameter, so you can 
use another path: 

/opt/rh/rh-python36/root/usr/bin/awx-create-venv -n ovirt -p 2 




In my awx container I don't have 
/opt/rh/rh-python36/root/usr/bin/awx-create-venv and don't have awx-create-venv 
at all in any path. 
Not in container based on image ansible/awx_task:9.0.1 nor in container based 
on image ansible/awx_web:9.0.1 

Gianluca 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4MSEW7LVC4QOTANHO7ZVTBSJ7V2DQP4U/


[ovirt-users] Re: AWX and error using ovirt as an inventory source

2019-11-28 Thread Gianluca Cecchi
On Thu, Nov 28, 2019 at 5:33 PM Nathanaël Blanchet  wrote:

>
> On 28/11/2019 at 17:15, Gianluca Cecchi wrote:
>
> On Thu, Nov 28, 2019 at 4:59 PM Nathanaël Blanchet 
> wrote:
>
>> Hello gianluca,
>>
>> I reported this issue a long time ago (march of 19) in an unofficial rpm
>> awx project
>>
>> https://github.com/MrMEEE/awx-build/issues/72
>>
>
> But I see that it is marked as closed
>
> It's marked as closed for the unofficial RPM project only, not with the
> regular container deployment.
>
> All related RHV/ovirt stuff (not only dynamic inventory, but all ovirt*
>> ansible module) fail because of the version of pycurl (worked before 7.19)
>>
> You mean inside awx container, correct?
> See below my comments, as I know almost nothing about venv concepts...
> sorry
>
>> What you need to do is create a py2.x venv then recompile latest pycurl
>> with nss support like this:
>>
>>- # /opt/rh/rh-python36/root/usr/bin/awx-create-venv (-e
>>/var/lib/awx/venv/) -n ovirt
>>
>> I have to run this inside awx container as user root? Can you explain the
> syntax? I get error because of the parenthesis...
>
> sure inside the container, parenthesis are the default parameters so you
> can use an other path:
>
> /opt/rh/rh-python36/root/usr/bin/awx-create-venv -n ovirt -p 2
>
>
> In my awx container I don't
have  /opt/rh/rh-python36/root/usr/bin/awx-create-venv and don't have
awx-create-venv at all in any path.
Not in container based on image ansible/awx_task:9.0.1 nor in container
based on image ansible/awx_web:9.0.1

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A24WM64RL2P4RVTIVK2IFHPNCHAG7G2T/


[ovirt-users] Re: hyperconverged single node with SSD cache fails gluster creation

2019-11-28 Thread Thomas Hoberg

Hi URS,

I have tried again using the latest release (4.3.7) and noted that now 
the more "explicit" variant you quote was generated.


The behavior has changed, but it still fails, now complaining about 
/dev/sdb being mounted (or otherwise inaccessible).
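
For anyone hitting the same complaint, a few read-only checks on the device the 
installer objects to; this is a generic diagnostic sketch, not taken from the 
attached logs:

    lsblk /dev/sdb        # is anything mounted on or holding sdb?
    findmnt -S /dev/sdb   # active mounts backed by sdb
    blkid /dev/sdb        # leftover filesystem / LVM signatures
    wipefs /dev/sdb       # with no options this only lists signatures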


I am attaching the logs.

I have an HDD RAID on /dev/sdb and an SSD partition on /dev/sda3 with 
>600 GB of space left.


I have mostly gone with defaults everywhere, used an arbiter (at least 
for the vmstore and data volumes), VDO, and write-through caching with a 
550 GB size (note that it fails to apply that value beyond the first node).


Has anyone else tried a hyperconverged 3-node with SSD caching with 
success recently?


Thanks for your feedback and help so far,

Thomas



gluster-deployment.log.gz
Description: GNU Zip compressed data
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SR6QYSJHMYY6JRTPC3TGH24NXYY62TZM/


[ovirt-users] Re: hyperconverged single node with SSD cache fails gluster creation

2019-11-28 Thread thomas
(could become a double post because I am using e-mail to attach the logs..)

Hi URS,

I have tried again using the latest release (4.3.7) and noted that now the more 
"explicit" variant you quote was generated.

The behavior has changed, but it still fails, now complaining about /dev/sdb 
being mounted (or otherwise inaccessible).

I am attaching the logs.

I have an HDD RAID on /dev/sdb and an SSD partition on /dev/sda3 with >600 GB of 
space left.

I have mostly gone with defaults everywhere, used an arbiter (at least for the 
vmstore and data volumes), VDO, and write-through caching with a 550 GB size (note 
that it fails to apply that value beyond the first node).

Has anyone else tried a hyperconverged 3-node with SSD caching with success 
recently?

Thanks for your feedback and help so far,

Thomas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K3UYGNAHI4PTIJX6A2EMNT62JEWRHMRE/


[ovirt-users] Re: Cannot activate/deactivate storage domain

2019-11-28 Thread Albl, Oliver
Hi,

  any ideas if or how I can recover the storage domain? I will need to destroy 
it, as the ongoing scsi scans are becoming an impediment.

Thank you and all the best,
Oliver

-Original Message-
From: Oliver Albl 
Sent: Tuesday, 5 November 2019 11:20
To: users@ovirt.org
Subject: [ovirt-users] Re: Cannot activate/deactivate storage domain

> On Mon, Nov 4, 2019 at 9:18 PM Albl, Oliver  wrote:
>
> What was the last change in the system? upgrade? network change? storage
> change?
>

Last change was four weeks ago: oVirt upgrade from 4.3.3 to 4.3.6.7 (including
CentOS hosts to 7.7 1908).

>
> This is expected if some domain is not accessible on all hosts.
>
>
> This means sanlock timed out renewing the lockspace
>
>
> If a host cannot access all storage domains in the DC, the system sets
> it to non-operational, and will probably try to reconnect it later.
>
>
> This means reading 4k from the start of the metadata lv took 9.6 seconds.
> Something in the path to storage is bad (kernel, network, storage).
>
>
> We have 20 seconds (4 retries, 5 seconds per retry) grace time in multipath
> when there are no active paths, before I/O fails, pausing the VM. We
> also resume paused VMs when storage monitoring works again, so maybe
> the VMs were paused and resumed.
>
> However, for storage monitoring we have a strict 10 second timeout. If
> reading from the metadata lv times out or fails and does not operate
> normally after 5 minutes, the domain will become inactive.
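
A quick way to cross-check that arithmetic on a host; treating no_path_retry and
polling_interval as the relevant settings is an assumption about the VDSM-managed
multipath defaults, not something stated in this thread:

    # expect polling_interval 5 and no_path_retry 4, i.e. 4 retries x 5 s = 20 s
    grep -E 'polling_interval|no_path_retry' /etc/multipath.conf /etc/multipath.conf.d/*.conf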
>
>
> This can explain the read timeouts.
>
>
> This looks the right way to troubleshoot this.
>
>
> We need vdsm logs to understand this failure.
>
>
> This does not mean OVF is corrupted, only that we could not store new
> data. The older data on the other OVFSTORE disk is probably fine.
> Hopefully the system will not try to write to the other OVFSTORE disk,
> overwriting the last good version.
>
>
> This is normal, the first 2048 bytes are always zeroes. This area was
> used for domain metadata in older versions.
>
>
> Please share more details:
>
> - output of "lsblk"
> - output of "multipath -ll"
> - output of "/usr/libexec/vdsm/fc-scan -v"
> - output of "vgs -o +tags problem-domain-id"
> - output of "lvs -o +tags problem-domain-id"
> - contents of /etc/multipath.conf
> - contents of /etc/multipath.conf.d/*.conf
> - /var/log/messages since the issue started
> - /var/log/vdsm/vdsm.log* since the issue started on one of the hosts
>
> A bug is probably the best place to keep these logs and make it easy to
> track.
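
A minimal sketch that collects the outputs requested above into a single file to
attach to the bug (problem-domain-id is a placeholder for the actual storage
domain UUID; logs under /var/log still need to be attached separately):

    DOMAIN=problem-domain-id   # replace with the real domain UUID
    {
      lsblk
      multipath -ll
      /usr/libexec/vdsm/fc-scan -v
      vgs -o +tags "$DOMAIN"
      lvs -o +tags "$DOMAIN"
      cat /etc/multipath.conf /etc/multipath.conf.d/*.conf
    } > storage-debug-$(hostname).txt 2>&1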

Please see https://bugzilla.redhat.com/show_bug.cgi?id=1768821

>
> Thanks,
> Nir

Thank you!
Oliver
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QZ5ZN2S7N54JYVV3RWOYOHTEAWFQ23Q7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZF2JJRFTP43XZNLFYXQIAOJKVDGYKAHL/


[ovirt-users] Re: Cannot activate/deactivate storage domain

2019-11-28 Thread Albl, Oliver


smime.p7m
Description: S/MIME encrypted message
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GSBKAWBDS432YCFCI76AOE4TKZDK72F6/


[ovirt-users] Re: AWX and error using ovirt as an inventory source

2019-11-28 Thread Nathanaël Blanchet


On 28/11/2019 at 17:15, Gianluca Cecchi wrote:
On Thu, Nov 28, 2019 at 4:59 PM Nathanaël Blanchet wrote:


Hello gianluca,

I reported this issue a long time ago (march of 19) in an
unofficial rpm awx project

https://github.com/MrMEEE/awx-build/issues/72


But I see that it is marked as closed
It's marked as closed for the unofficial RPM project only, not with the 
regular container deployment.


All related RHV/ovirt stuff (not only dynamic inventory, but all
ovirt* ansible module) fail because of the version of pycurl
(worked before 7.19)

You mean inside awx container, correct?
See below my comments, as I know almost nothing about venv concepts... 
sorry


What you need to do is create a py2.x venv then recompile latest
pycurl with nss support like this:

  * # /opt/rh/rh-python36/root/usr/bin/awx-create-venv (-e
/var/lib/awx/venv/) -n ovirt

I have to run this inside awx container as user root? Can you explain 
the syntax? I get error because of the parenthesis...


sure, inside the container; the parentheses show the default parameter, so you 
can use another path:


/opt/rh/rh-python36/root/usr/bin/awx-create-venv -n ovirt -p 2


  * source /var/lib/awx/venv/ovirt/bin/activate

as root correct? and I should source the just created venv, correct?

yes


  * # (ovirt) export PYCURL_SSL_LIBRARY=nss; pip install pycurl
--compile --no-cache-dir

I presume (ovirt) is a sort of prompt of the venv

yes
Will the settings be preserved across reboots of the server hosting the 
container?

the setting is inside the container


  * choose this venv instead of the regular in your inventory page
and you'll be able to sync

I don't see in awx an option to specify a venv or another...


You may need to create a second venv for the drop-down menu to appear.



PS: Something else that may help, try to hack the ovirt4.py with
ansible_host if you want to call the hosts into playbook by the
hostname and not the first IP:

vi

/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/awx/plugins/inventory/ovirt4.py

'affinity_labels': [label.name for label in labels],
## NBT addition to get the host name instead of the IP
'ansible_host': vm.name,
'affinity_groups': [


I will investigate, thanks.
Gianluca


--
Nathanaël Blanchet

Network supervision
IT Infrastructure Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/THLWR2HQWZJAM336DTN6MLKBO26ZEJKZ/


[ovirt-users] Re: AWX and error using ovirt as an inventory source

2019-11-28 Thread Nathanaël Blanchet


On 28/11/2019 at 16:59, Nathanaël Blanchet wrote:


Hello gianluca,

I reported this issue a long time ago (march of 19) in an unofficial 
rpm awx project


https://github.com/MrMEEE/awx-build/issues/72

All related RHV/ovirt stuff (not only dynamic inventory, but all 
ovirt* ansible module) fail because of the version of pycurl (worked 
before 7.19)


What you need to do is create a py2.x venv then recompile latest 
pycurl with nss support like this:


  * # /opt/rh/rh-python36/root/usr/bin/awx-create-venv (-e
/var/lib/awx/venv/) -n ovirt


I forgot to specify python2 (some ovirt ansible modules fail with py3)

# /opt/rh/rh-python36/root/usr/bin/awx-create-venv (-e 
/var/lib/awx/venv/) -n ovirt -p 2


  * source /var/lib/awx/venv/ovirt/bin/activate
  * # (ovirt) export PYCURL_SSL_LIBRARY=nss; pip install pycurl
--compile --no-cache-dir


 * # (ovirt) pip install ovirt-engine-sdk-python ansible psutil
   python-memcached


  * choose this venv instead of the regular in your inventory page and
you'll be able to sync
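
Putting the steps above together, a minimal end-to-end sketch (it assumes the
awx-create-venv helper actually exists at that path inside your awx_task image):

    # run inside the awx_task container, as root
    /opt/rh/rh-python36/root/usr/bin/awx-create-venv -n ovirt -p 2   # -p 2 = Python 2 venv
    source /var/lib/awx/venv/ovirt/bin/activate
    # rebuild pycurl against NSS instead of OpenSSL
    export PYCURL_SSL_LIBRARY=nss
    pip install pycurl --compile --no-cache-dir
    pip install ovirt-engine-sdk-python ansible psutil python-memcached
    # then select the "ovirt" venv for the inventory source in the AWX UI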

PS: Something else that may help, try to hack the ovirt4.py with 
ansible_host if you want to call the hosts into playbook by the 
hostname and not the first IP:


vi 
/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/awx/plugins/inventory/ovirt4.py


'affinity_labels': [label.name for label in labels],
## NBT addition to get the host name instead of the IP
'ansible_host': vm.name,
'affinity_groups': [


On 28/11/2019 at 15:51, Gianluca Cecchi wrote:

Hello,
I have awx 9.0.1 and ansible 2.8.5 in container of a CentOS 7.7 server.
I'm trying to use oVirt 4.3.6.7-1.el7 as a source of an inventory in 
awx but I get error when syncing


Find at bottom below the error messages.
I see that in recent past (around June this year) there were some 
problems, but they should be solved now, correct?
There was also a problem in syncing when some powered off VMs were 
present in oVirt env, but I think this solved too, correct?


Any way to replicate / test from command line of awx container?
I try some things but in command line I always get error regarding

oVirt inventory script requires ovirt-engine-sdk-python >= 4.0.0

that I think depends on not using correct command line and/or not 
setting needed env.


Thanks in advance,
Gianluca

    2.536 INFO     Updating inventory 4: MYDC_OVIRT
    3.011 INFO     Reading Ansible inventory source: 
/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/plugins/inventory/ovirt4.py

    3.013 INFO     Using VIRTUAL_ENV: /var/lib/awx/venv/ansible
    3.013 INFO     Using PATH: 
/var/lib/awx/venv/ansible/bin:/var/lib/awx/venv/awx/bin:/var/lib/awx/venv/awx/bin:/var/lib/awx/venv/awx/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    3.013 INFO     Using PYTHONPATH: 
/var/lib/awx/venv/ansible/lib/python3.6/site-packages:

Traceback (most recent call last):
  File "/var/lib/awx/venv/awx/bin/awx-manage", line 11, in 
    load_entry_point('awx==9.0.1.0', 'console_scripts', 'awx-manage')()
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/__init__.py", 
line 158, in manage

    execute_from_command_line(sys.argv)
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py", 
line 381, in execute_from_command_line

    utility.execute()
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py", 
line 375, in execute

    self.fetch_command(subcommand).run_from_argv(self.argv)
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py", 
line 323, in run_from_argv

    self.execute(*args, **cmd_options)
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py", 
line 364, in execute

    output = self.handle(*args, **options)
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py", 
line 1153, in handle

    raise exc
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py", 
line 1043, in handle

    venv_path=venv_path, verbosity=self.verbosity).load()
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py", 
line 214, in load

    return self.command_to_json(base_args + ['--list'])
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py", 
line 197, in command_to_json

    self.method, proc.returncode, stdout, stderr))
RuntimeError: ansible-inventory failed (rc=1) with stdout:
stderr:
ansible-inventory 2.8.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = 
['/var/lib/awx/.ansible/plugins/modules', 
'/usr/share/ansible/plugins/modules']
  ansible python module location = 
/usr/lib/python3.6/site-packages/ansible

  executable location = /usr/bin/ansible-inventory
  python version = 3.6.8 (default, Oct  7 2019, 17:58:22) [GCC 8.2.1 
20

[ovirt-users] Re: AWX and error using ovirt as an inventory source

2019-11-28 Thread Gianluca Cecchi
On Thu, Nov 28, 2019 at 4:59 PM Nathanaël Blanchet  wrote:

> Hello gianluca,
>
> I reported this issue a long time ago (march of 19) in an unofficial rpm
> awx project
>
> https://github.com/MrMEEE/awx-build/issues/72
>

But I see that it is marked as closed

> All related RHV/ovirt stuff (not only dynamic inventory, but all ovirt*
> ansible module) fail because of the version of pycurl (worked before 7.19)
>
You mean inside awx container, correct?
See below my comments, as I know almost nothing about venv concepts... sorry

> What you need to do is create a py2.x venv then recompile latest pycurl
> with nss support like this:
>
>- # /opt/rh/rh-python36/root/usr/bin/awx-create-venv (-e
>/var/lib/awx/venv/) -n ovirt
>
> I have to run this inside awx container as user root? Can you explain the
syntax? I get error because of the parenthesis...


>
>- source /var/lib/awx/venv/ovirt/bin/activate
>
> as root correct? and I should source the just created venv, correct?

>
>- # (ovirt) export PYCURL_SSL_LIBRARY=nss; pip install pycurl
>--compile --no-cache-dir
>
> I presume (ovirt) is a sort of prompt of the venv
Will the settings be preserved across reboots of the server hosting the
container?

>
>- choose this venv instead of the regular in your inventory page and
>you'll be able to sync
>
> I don't see in awx an option to specify a venv or another...

> PS: Something else that may help, try to hack the ovirt4.py with
> ansible_host if you want to call the hosts into playbook by the hostname
> and not the first IP:
>
> vi
> /opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/awx/plugins/inventory/ovirt4.py
> 'affinity_labels': [label.name for label in labels],
> ## NBT addition to get the host name instead of the IP
> 'ansible_host': vm.name,
> 'affinity_groups': [
>
>
> I will investigate, thanks.
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5K5PIGNREQJQ5O54FLSTW7O2DS5LF2NR/


[ovirt-users] Re: Migrate VM from oVirt to oVirt

2019-11-28 Thread adrianquintero
Thank you Luca, this process worked for me; just wondering why I could not 
achieve this by generating an OVA.

Thanks again.

Adrian.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EAU2OC6PQ3YLJXOVDH2WQTZ6DOZBC5YN/


[ovirt-users] Re: HostedEngine Deployment fails on AMD EPYC 7402P 4.3.7

2019-11-28 Thread Strahil
The HostedEngine does it automatically, but the option that defines the OVF 
refresh interval is not accurate.
Just power it up without that option (cluster in maintenance) and keep it 
running for a day.
On the next day power it off and try to power it up via hosted-engine 
--vm-start.
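
A rough sketch of that sequence with the corresponding commands; the
global-maintenance handling is an assumption, adjust to your setup:

    hosted-engine --set-maintenance --mode=global   # keep the HA agents from interfering
    virsh start HostedEngine                        # start the (edited) engine VM, leave it running a day
    # next day:
    hosted-engine --vm-shutdown
    hosted-engine --vm-start
    hosted-engine --set-maintenance --mode=none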

Best Regards,
Strahil Nikolov

On Nov 28, 2019 11:54, Ralf Schenk  wrote:
>
> Hello,
>
> I did something like that via "virsh edit HostedEngine". 
>
> But how is the change written back to the shared storage ("hosted_storage") so 
> it stays permanent for the HA Engine? 
>
> I was able to boot up HostedEngine manually via virsh start after removing 
> the required flag from XML (I first added a user to sasldb in 
> /etc/libvirt/passwd.db to be able to log into libvirt).
>
> Bye
>
>
> On 28.11.2019 at 05:51, Strahil wrote:
>
> Hi Ralf,
> When the deployment fail - you can dump the xml from virsh , edit it, 
> undefine the current HostedEngine and define your modified HostedEngine 's  
> xml.
>
> Once you do that, you can try to start the
> VM.
>
> Good luck.
>
> Best Regards,
> Strahil Nikolov
>
> On Nov 27, 2019 18:28, Ralf Schenk  wrote:
>>
>> Hello,
>>
>> This week I tried to deploy Hosted Engine on Ovirt-Node-NG 4.3.7 based Host.
>>
>> At the point where the locally deployed Engine is copied to hosted-storage (in my 
>> case NFS) and deployment tries to start the Engine (via ovirt-ha-agent), this 
>> fails.
>>
>> QUEMU Log (/var/log/libvirt/qemu/HostedEngine.log) only shows "2019-11-27 
>> 16:17:16.833+: shutting down, reason=failed". 
>>
>> Researching the cause: the built libvirt VM XML includes the feature 
>> "virt-ssbd" as a requirement, which is simply not there.
>>
>> From VM XML:
>>
>>   <cpu ...>
>>     <model>EPYC</model>
>>     <feature policy="require" name="virt-ssbd"/>
>>     ...
>>   </cpu>
>>
>> from cat /proc/cpuinfo:
>>
>> processor   : 47
>> vendor_id   : AuthenticAMD
>> cpu family  : 23
>> model   : 49
>> model name  : AMD EPYC 7402P 24-Core Processor
>> stepping    : 0
>> microcode   : 0x830101c
>> cpu MHz : 2800.000
>> cache size  : 512 KB
>> physical id : 0
>> siblings    : 48
>> core id : 30
>> cpu cores   : 24
api
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T4VOONMQ5QWITWTLYPOPJA4FZIGTFEKY/


[ovirt-users] Re: AWX and error using ovirt as an inventory source

2019-11-28 Thread Nathanaël Blanchet

Hello gianluca,

I reported this issue a long time ago (march of 19) in an unofficial rpm 
awx project


https://github.com/MrMEEE/awx-build/issues/72

All related RHV/oVirt stuff (not only the dynamic inventory, but all ovirt* 
ansible modules) fails because of the version of pycurl (it worked before 7.19)


What you need to do is create a py2.x venv then recompile latest pycurl 
with nss support like this:


 * # /opt/rh/rh-python36/root/usr/bin/awx-create-venv (-e
   /var/lib/awx/venv/) -n ovirt
 * source /var/lib/awx/venv/ovirt/bin/activate
 * # (ovirt) export PYCURL_SSL_LIBRARY=nss; pip install pycurl
   --compile --no-cache-dir
 * choose this venv instead of the regular in your inventory page and
   you'll be able to sync

PS: Something else that may help, try to hack the ovirt4.py with 
ansible_host if you want to call the hosts into playbook by the hostname 
and not the first IP:


vi 
/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/awx/plugins/inventory/ovirt4.py


'affinity_labels': [label.name for label in labels],
## NBT addition to get the host name instead of the IP
'ansible_host': vm.name,
'affinity_groups': [


On 28/11/2019 at 15:51, Gianluca Cecchi wrote:

Hello,
I have awx 9.0.1 and ansible 2.8.5 in container of a CentOS 7.7 server.
I'm trying to use oVirt 4.3.6.7-1.el7 as a source of an inventory in 
awx but I get error when syncing


Find at bottom below the error messages.
I see that in recent past (around June this year) there were some 
problems, but they should be solved now, correct?
There was also a problem in syncing when some powered off VMs were 
present in oVirt env, but I think this solved too, correct?


Any way to replicate / test from command line of awx container?
I try some things but in command line I always get error regarding

oVirt inventory script requires ovirt-engine-sdk-python >= 4.0.0

that I think depends on not using correct command line and/or not 
setting needed env.


Thanks in advance,
Gianluca

    2.536 INFO     Updating inventory 4: MYDC_OVIRT
    3.011 INFO     Reading Ansible inventory source: 
/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/plugins/inventory/ovirt4.py

    3.013 INFO     Using VIRTUAL_ENV: /var/lib/awx/venv/ansible
    3.013 INFO     Using PATH: 
/var/lib/awx/venv/ansible/bin:/var/lib/awx/venv/awx/bin:/var/lib/awx/venv/awx/bin:/var/lib/awx/venv/awx/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    3.013 INFO     Using PYTHONPATH: 
/var/lib/awx/venv/ansible/lib/python3.6/site-packages:

Traceback (most recent call last):
  File "/var/lib/awx/venv/awx/bin/awx-manage", line 11, in 
    load_entry_point('awx==9.0.1.0', 'console_scripts', 'awx-manage')()
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/__init__.py", 
line 158, in manage

    execute_from_command_line(sys.argv)
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py", 
line 381, in execute_from_command_line

    utility.execute()
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py", 
line 375, in execute

    self.fetch_command(subcommand).run_from_argv(self.argv)
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py", 
line 323, in run_from_argv

    self.execute(*args, **cmd_options)
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py", 
line 364, in execute

    output = self.handle(*args, **options)
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py", 
line 1153, in handle

    raise exc
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py", 
line 1043, in handle

    venv_path=venv_path, verbosity=self.verbosity).load()
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py", 
line 214, in load

    return self.command_to_json(base_args + ['--list'])
  File 
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py", 
line 197, in command_to_json

    self.method, proc.returncode, stdout, stderr))
RuntimeError: ansible-inventory failed (rc=1) with stdout:
stderr:
ansible-inventory 2.8.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = 
['/var/lib/awx/.ansible/plugins/modules', 
'/usr/share/ansible/plugins/modules']
  ansible python module location = 
/usr/lib/python3.6/site-packages/ansible

  executable location = /usr/bin/ansible-inventory
  python version = 3.6.8 (default, Oct  7 2019, 17:58:22) [GCC 8.2.1 
20180905 (Red Hat 8.2.1-3)]

Using /etc/ansible/ansible.cfg as config file
 [WARNING]:  * Failed to parse /var/lib/awx/venv/awx/lib64/python3.6/site-
packages/awx/plugins/inventory/ovirt4.py with script plugin: Inventory 
script

(/var/lib/awx/venv/awx/lib64/python3.6/site-
packages/awx/plugins/inventory/ovirt4.py) h

[ovirt-users] Re: spice connection error

2019-11-28 Thread Strahil
Check on host 'node3' in the Network configuration whether you have a network out of 
sync or any other clues about the error.


Best Regards,
Strahil Nikolov

On Nov 28, 2019 11:44, Kim Kargaard  wrote:
>
> Hi, 
>
> I am getting the attached error when trying to move the Display from the one 
> network to the other within the cluster. I can't see any place to set an IP 
> for the vnic that I want to set as the display network. 
>
> Any thoughts? 
>
> Kim 
>
> On 28/11/2019, 06:41, "Strahil"  wrote: 
>
>     As far as I know , the engine plays a role as a proxy during the 
> establishment of the connection. 
>     Check that you can reach both engine and the host from your system. 
>     
>     For the same reason, I use noVNC - as you just need  a single port to the 
> engine in addition to the rest of the settings. 
>     
>     Best Regards, 
>     Strahil Nikolov
>
>     On Nov 27, 2019 11:27, kim.karga...@noroff.no wrote: 
>     > 
>     > Hi, 
>     > 
>     > When trying to connect from a remote network on the spice console to a 
> VM, I get the following error: 
>     > 
>     > (remote-viewer:80195): virt-viewer-WARNING **: 11:05:22.322: Channel 
> error: Could not connect to proxy server xx.xx.xx.xx: Socket I/O timed out 
>     > 
>     > I found that the display is set to the management network and not the 
> VM network in the cluster logical network. However, when I try to set the 
> other vlan to be the display network, I get the following error: 
>     > 
>     > Error while executing action: Cannot edit Network. IP address has to be 
> set for the NIC that bears a role network. Network: student-vlan100, Nic: 
> p2p1.100 on host node3 violates that rule. 
>     > 
>     > I am not sure what this means. Any ideas? 
>     > 
>     > Kind regards 
>     > 
>     > Kim 
>     > ___ 
>     > Users mailing list -- users@ovirt.org 
>     > To unsubscribe send an email to users-le...@ovirt.org 
>     > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
>     > oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
>     > List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6YEI3I4NCAHQOTRTANIXDIGUNA32YM6J/
>  
>     
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4Y44AXB3BXGOV7DO6N4ZRPTSZCNC5EA/


[ovirt-users] AWX and error using ovirt as an inventory source

2019-11-28 Thread Gianluca Cecchi
Hello,
I have awx 9.0.1 and ansible 2.8.5 in container of a CentOS 7.7 server.
I'm trying to use oVirt 4.3.6.7-1.el7 as a source of an inventory in awx
but I get error when syncing

Find at bottom below the error messages.
I see that in the recent past (around June this year) there were some problems,
but they should be solved now, correct?
There was also a problem in syncing when some powered-off VMs were present
in the oVirt env, but I think this is solved too, correct?

Any way to replicate / test from command line of awx container?
I try some things but in command line I always get error regarding

oVirt inventory script requires ovirt-engine-sdk-python >= 4.0.0

that I think depends on not using correct command line and/or not setting
needed env.

Thanks in advance,
Gianluca

2.536 INFO Updating inventory 4: MYDC_OVIRT
3.011 INFO Reading Ansible inventory source:
/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/plugins/inventory/ovirt4.py
3.013 INFO Using VIRTUAL_ENV: /var/lib/awx/venv/ansible
3.013 INFO Using PATH:
/var/lib/awx/venv/ansible/bin:/var/lib/awx/venv/awx/bin:/var/lib/awx/venv/awx/bin:/var/lib/awx/venv/awx/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
3.013 INFO Using PYTHONPATH:
/var/lib/awx/venv/ansible/lib/python3.6/site-packages:
Traceback (most recent call last):
  File "/var/lib/awx/venv/awx/bin/awx-manage", line 11, in 
load_entry_point('awx==9.0.1.0', 'console_scripts', 'awx-manage')()
  File
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/__init__.py", line
158, in manage
execute_from_command_line(sys.argv)
  File
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py",
line 381, in execute_from_command_line
utility.execute()
  File
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py",
line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
  File
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py",
line 323, in run_from_argv
self.execute(*args, **cmd_options)
  File
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py",
line 364, in execute
output = self.handle(*args, **options)
  File
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py",
line 1153, in handle
raise exc
  File
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py",
line 1043, in handle
venv_path=venv_path, verbosity=self.verbosity).load()
  File
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py",
line 214, in load
return self.command_to_json(base_args + ['--list'])
  File
"/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/inventory_import.py",
line 197, in command_to_json
self.method, proc.returncode, stdout, stderr))
RuntimeError: ansible-inventory failed (rc=1) with stdout:
stderr:
ansible-inventory 2.8.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/var/lib/awx/.ansible/plugins/modules',
'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible-inventory
  python version = 3.6.8 (default, Oct  7 2019, 17:58:22) [GCC 8.2.1
20180905 (Red Hat 8.2.1-3)]
Using /etc/ansible/ansible.cfg as config file
 [WARNING]:  * Failed to parse /var/lib/awx/venv/awx/lib64/python3.6/site-
packages/awx/plugins/inventory/ovirt4.py with script plugin: Inventory
script
(/var/lib/awx/venv/awx/lib64/python3.6/site-
packages/awx/plugins/inventory/ovirt4.py) had an execution error:

  File "/usr/lib/python3.6/site-packages/ansible/inventory/manager.py",
line 268, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
  File
"/usr/lib/python3.6/site-packages/ansible/plugins/inventory/script.py",
line 161, in parse
raise AnsibleParserError(to_native(e))

 [WARNING]: Unable to parse /var/lib/awx/venv/awx/lib64/python3.6/site-
packages/awx/plugins/inventory/ovirt4.py as an inventory source

ERROR! No inventory was parsed, please check your configuration and options.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DGTCAAC5DZHSKGO5YTLMJSOQO3HUCMDC/


[ovirt-users] VDSM Errors see below

2019-11-28 Thread rob . downer
I have removed all deployment of the hosted engine by running the following 
commands.
ovirt-hosted-engine-cleanup
vdsm-tool configure --force
systemctl restart libvirtd
systemctl restart vdsm

On my hosts I have the following; ovirt 1 is the host I ran the hosted-engine 
setup on.

I have set the Gluster network to use the same subnet and set up forward and 
reverse DNS for the Gluster network NICs.

I had this working using a separate subnet but thought to try it on the same 
subnet to avoid any issues that may have occurred while using a separate 
network subnet.

The main host IP address is still showing in Unmanaged Connections on ovirt 1 
... is this anything to be concerned about after running the commands above?

I have restarted all machines.

All come back with these VDSM errors...

Node 1


[root@ovirt1 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: inactive (dead) since Thu 2019-11-28 13:29:40 UTC; 37min ago
  Process: 31178 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh 
--post-stop (code=exited, status=0/SUCCESS)
  Process: 30721 ExecStart=/usr/share/vdsm/daemonAdapter -0 /dev/null -1 
/dev/null -2 /dev/null /usr/share/vdsm/vdsmd (code=exited, status=0/SUCCESS)
 Main PID: 30721 (code=exited, status=0/SUCCESS)

Nov 26 22:42:49 ovirt1.kvm.private vdsm[30721]: WARN MOM not available, KSM 
stats will be missing.
Nov 26 22:42:49 ovirt1.kvm.private vdsm[30721]: WARN Not ready yet, ignoring 
event '|virt|VM_status|871ce9d5-417a-4278-8446-28b681760c1b' 
args={'871ce9d5-417a-4278-8446-28b681760c1b': {'status': 'Poweri...
Nov 28 13:28:43 ovirt1.kvm.private vdsm[30721]: WARN File: 
/var/run/vdsm/trackedInterfaces/eno2 already removed
Nov 28 13:29:26 ovirt1.kvm.private vdsm[30721]: WARN File: 
/var/lib/libvirt/qemu/channels/871ce9d5-417a-4278-8446-28b681760c1b.com.redhat.rhevm.vdsm
 already removed
Nov 28 13:29:26 ovirt1.kvm.private vdsm[30721]: WARN File: 
/var/lib/libvirt/qemu/channel/target/domain-1-HostedEngineLocal/org.qemu.guest_agent.0
 already removed
Nov 28 13:29:39 ovirt1.kvm.private vdsm[30721]: WARN MOM not available.
Nov 28 13:29:39 ovirt1.kvm.private vdsm[30721]: WARN MOM not available, KSM 
stats will be missing.
Nov 28 13:29:39 ovirt1.kvm.private systemd[1]: Stopping Virtual Desktop Server 
Manager...
Nov 28 13:29:39 ovirt1.kvm.private vdsmd_init_common.sh[31178]: vdsm: Running 
run_final_hooks
Nov 28 13:29:40 ovirt1.kvm.private systemd[1]: Stopped Virtual Desktop Server 
Manager.
Hint: Some lines were ellipsized, use -l to show in full.
[root@ovirt1 ~]# nodectl check
Status: WARN
Bootloader ... OK
  Layer boot entries ... OK
  Valid boot entries ... OK
Mount points ... OK
  Separate /var ... OK
  Discard is used ... OK
Basic storage ... OK
  Initialized VG ... OK

  Initialized Thin Pool ... OK
  Initialized LVs ... OK
Thin storage ... OK
  Checking available space in thinpool ... OK
  Checking thinpool auto-extend ... OK
vdsmd ... BAD

NODE 2
[root@ovirt2 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: active (running) since Thu 2019-11-28 13:57:30 UTC; 1min 13s ago
  Process: 3626 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start 
(code=exited, status=0/SUCCESS)
 Main PID: 5418 (vdsmd)
Tasks: 38
   CGroup: /system.slice/vdsmd.service
   └─5418 /usr/bin/python2 /usr/share/vdsm/vdsmd

Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: libvirt: Network 
Filter Driver error : Network filter not found: no nwfilter with matching name 
'vdsm-no-mac-spoofing'
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running 
dummybr
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running 
tune_system
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running 
test_space
Nov 28 13:57:30 ovirt2.kvm.private vdsmd_init_common.sh[3626]: vdsm: Running 
test_lo
Nov 28 13:57:30 ovirt2.kvm.private systemd[1]: Started Virtual Desktop Server 
Manager.
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN File: 
/var/run/vdsm/trackedInterfaces/eno1 already removed
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN File: 
/var/run/vdsm/trackedInterfaces/eno2 already removed
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN MOM not available.
Nov 28 13:57:32 ovirt2.kvm.private vdsm[5418]: WARN MOM not available, KSM 
stats will be missing.
[root@ovirt2 ~]# nodectl check
Status: OK
Bootloader ... OK
  Layer boot entries ... OK
  Valid boot entries ... OK
Mount points ... OK
  Separate /var ... OK
  Discard is used ... OK
Basic storage ... OK
  Initialized VG ... OK
  Initialized Thin Pool ... OK
  Initialized LVs ... OK
Thin storage ... OK
  Checking available space in thinpool ... OK
  Checking thinpool auto-extend ... OK
vdsmd ... OK
[root@ovirt2 ~]# 


[ovirt-users] Re: HostedEngine Deployment fails on AMD EPYC 7402P 4.3.7

2019-11-28 Thread Ralf Schenk
Hello,

I did something like that via "virsh edit HostedEngine".

But how is the change written back to the shared storage
("hosted_storage") so it stays permanent for the HA Engine?

I was able to boot up HostedEngine manually via virsh start after
removing the required flag from XML (I first added a user to sasldb in
/etc/libvirt/passwd.db to be able to log into libvirt).
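
Condensed into commands, the workaround looks roughly like this; the user name is
just an example, and the XML element to drop is the virt-ssbd requirement
mentioned above:

    saslpasswd2 -a libvirt someuser   # add a user to /etc/libvirt/passwd.db
    virsh edit HostedEngine           # remove the <feature policy='require' name='virt-ssbd'/> entry
    virsh start HostedEngine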

Bye


On 28.11.2019 at 05:51, Strahil wrote:
>
> Hi Ralf,
> When the deployment fail - you can dump the xml from virsh , edit it,
> undefine the current HostedEngine and define your modified
> HostedEngine 's  xml.
>
> Once you do that, you can try to start the
> VM.
>
> Good luck.
>
> Best Regards,
> Strahil Nikolov
>
> On Nov 27, 2019 18:28, Ralf Schenk  wrote:
>
> Hello,
>
> This week I tried to deploy Hosted Engine on Ovirt-Node-NG 4.3.7
> based Host.
>
> At the point where the locally deployed Engine is copied to
> hosted-storage (in my case NFS) and deployment tries to start the
> Engine (via ovirt-ha-agent), this fails.
>
> QUEMU Log (/var/log/libvirt/qemu/HostedEngine.log) only shows
> "2019-11-27 16:17:16.833+: shutting down, reason=failed".
>
> Researching the cause: the built libvirt VM XML includes the
> feature "virt-ssbd" as a requirement, which is simply not there.
>
> From VM XML:
>
>   <cpu ...>
>     <model>EPYC</model>
>     <feature policy="require" name="virt-ssbd"/>
>     ...
>   </cpu>
>
> from cat /proc/cpuinfo:
>
> processor   : 47
> vendor_id   : AuthenticAMD
> cpu family  : 23
> model   : 49
> model name  : AMD EPYC 7402P 24-Core Processor
> stepping    : 0
> microcode   : 0x830101c
> cpu MHz : 2800.000
> cache size  : 512 KB
> physical id : 0
> siblings    : 48
> core id : 30
> cpu cores   : 24
> apicid  : 61
> initial apicid  : 61
> fpu : yes
> fpu_exception   : yes
> cpuid level : 16
> wp  : yes
> flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
> pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx
> mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl
> xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni
> pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes
> xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy
> abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce
> topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3
> hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase
> bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb
> sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total
> cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock
> nrip_save tsc_scale vmcb_clean flushbyasid decodeassists
> pausefilter pfthreshold avic v_vmsave_vmload vgif umip
> overflow_recov succor smca
> bogomips    : 5600.12
> TLB size    : 3072 4K pages
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 43 bits physical, 48 bits virtual
> power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
>
> Any solution/workaround available ?
>
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de*
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3ZSUNSVA3RVQRYKBFEJOGB4DZC5NUYB/


[ovirt-users] Re: Certificate of host is invalid

2019-11-28 Thread Jon bae
On Thu, Nov 28, 2019 at 07:42, Milan Zamazal <mzama...@redhat.com> wrote:

> Strahil  writes:
>
> > Hi ,
> >
> > You can try with:
> > 1. Set the host in maintenance
> > 2. From Install dropdown , select 'reinstall' and then configure the
> > necessary info + whether you would like to use the host as Host for
> > the HostedEngine VM.
>
> Rather than full reinstall, Enroll Certificate action (just next to
> Reinstall in the menu) should be faster and sufficient.  You still need
> to set the host to maintenance before being allowed to do it.
>
>
Thank you very much! I had already thought I would have to get my hands dirty in the
console, but this was very easy!

Regards

Jonathan
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZH2IX2IUAVBD3BG55DV2RYYLSKDYMVNM/