Thanks again for the help on this.

Summary: Got past one roadblock only to hit another.
Time to give up.

Feel free to review my notes in case I missed anything.

I had to give myself a bit of a crash course in Ansible to follow the playbook.
See if I have this correct:

~~~
    100   - name: Parse server CPU list
    101     set_fact:
    102       server_cpu_dict: "{{ server_cpu_dict | combine({item.split(':')[1]: item.split(':')[3]}) }}"
    103     with_items: >-
    104       {{ server_cpu_list.json['values']['system_option_value'][0]['value'].split('; ')|list|difference([ '']) }}
    105   - debug: var=server_cpu_dict
    106   - name: Debugging why deployment fails #1
    107     debug:
    108       msg:
    109       - "Loop is done over {{server_cpu_list.json['values']['system_option_value'][0]['value'].split('; ')|list|difference(['']) }}"
    110       - "Actual value of server_cpu_dict before the set_fact is {{server_cpu_dict }}"
    111   - name : Debuggind when deployment fails #2
    112     debug:
    113       msg:
    114       - "cluster_cpu is {{ cluster_cpu }}"
    115       - "ovirt_cluster_info is {{ ovirt_cluster_info }}"
    116   - name: Convert CPU model name
    117     set_fact:
    118       cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"
    119   - debug: var=cluster_cpu_model
~~~
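For my own notes, the Jinja pipeline in that 'Parse server CPU list' task is equivalent to this rough Python sketch (sample data shortened from the actual API value, and the trailing separator is what the `difference([''])` filter is there to clean up):

```python
# Rough Python equivalent of the 'Parse server CPU list' set_fact.
# The real input is server_cpu_list.json['values']['system_option_value'][0]['value'];
# this is a shortened sample of it.
value = ("1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; "
         "3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64; ")

server_cpu_dict = {}
# split('; ') leaves an empty item from the trailing separator;
# difference(['']) in the playbook drops it, so we skip it here too.
for item in value.split('; '):
    if item == '':
        continue
    fields = item.split(':')
    # field 1 is the display name, field 3 is the CPU model string
    server_cpu_dict[fields[1]] = fields[3]

print(server_cpu_dict)
# {'Intel Nehalem Family': 'Nehalem', 'AMD EPYC': 'EPYC'}
```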

Now we fail on my Debug #2 task.

[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ovirt_cluster_info' is undefined\n\nThe error appears to be in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml': line 111, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - \"Actual value of server_cpu_dict before the set_fact is {{server_cpu_dict }}\"\n - name : Debuggind when deployment fails #2\n ^ here\n"}

Tracing this backwards

'server_cpu_dict' is set in 'Parse server CPU list'.
It is built from server_cpu_list.

So I need to add the debug after that fact is created. The debug failed because I had placed it before. I moved it and now it traces correctly. In fact, there is already a debug at line 105 for that, so mine is now just a duplicate, but I kept it as a learning experience for instrumenting Ansible.

server_cpu_list is set in 'Get server CPU list via REST API'.
It is the return from the API call to the running local instance.


INFO ] ["Loop is done over ['1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64', ' 2:Secure Intel Nehalem Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64', ' 3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64', ' 4:Secure Intel Westmere Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64', ' 5:Intel SandyBridge Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64', ' 6:Secure Intel SandyBridge Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64', ' 7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64', ' 8:Secure Intel IvyBridge Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64', ' 9:Intel Haswell Family:vmx,nx,model_Haswell-noTSX:Haswell-noTSX:x86_64', ' 10:Secure Intel Haswell Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell-noTSX:Haswell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64', ' 11:Intel Broadwell Family:vmx,nx,model_Broadwell-noTSX:Broadwell-noTSX:x86_64', ' 12:Secure Intel Broadwell Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell-noTSX:Broadwell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64', ' 13:Intel Skylake Client Family:vmx,nx,model_Skylake-Client:Skylake-Client,-hle,-rtm:x86_64', ' 14:Secure Intel Skylake Client Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64', ' 15:Intel Skylake Server Family:vmx,nx,model_Skylake-Server:Skylake-Server,-hle,-rtm:x86_64', ' 16:Secure Intel Skylake Server Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64', ' 17:Intel Cascadelake Server Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64', ' 18:Secure Intel Cascadelake Server 
Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64', ' 1:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64', ' 2:AMD Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64', ' 3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64', ' 4:Secure AMD EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64', ' 1:IBM POWER8:powernv,model_POWER8:POWER8:ppc64', ' 2:IBM POWER9:powernv,model_POWER9:POWER9:ppc64', ' 1:IBM z114, z196:sie,model_z196-base:z196-base:s390x', ' 2:IBM zBC12, zEC12:sie,model_zEC12-base:zEC12-base:s390x', ' 3:IBM z13s, z13:sie,model_z13-base:z13-base:s390x', ' 4:IBM z14:sie,model_z14-base:z14-base:s390x']", "Actual value of server_cpu_dict before the set_fact is {'Intel Nehalem Family': 'Nehalem', 'Secure Intel Nehalem Family': 'Nehalem,+spec-ctrl,+ssbd,+md-clear', 'Intel Westmere Family': 'Westmere', 'Secure Intel Westmere Family': 'Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear', 'Intel SandyBridge Family': 'SandyBridge', 'Secure Intel SandyBridge Family': 'SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear', 'Intel IvyBridge Family': 'IvyBridge', 'Secure Intel IvyBridge Family': 'IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear', 'Intel Haswell Family': 'Haswell-noTSX', 'Secure Intel Haswell Family': 'Haswell-noTSX,+spec-ctrl,+ssbd,+md-clear', 'Intel Broadwell Family': 'Broadwell-noTSX', 'Secure Intel Broadwell Family': 'Broadwell-noTSX,+spec-ctrl,+ssbd,+md-clear', 'Intel Skylake Client Family': 'Skylake-Client,-hle,-rtm', 'Secure Intel Skylake Client Family': 'Skylake-Client,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm', 'Intel Skylake Server Family': 'Skylake-Server,-hle,-rtm', 'Secure Intel Skylake Server Family': 'Skylake-Server,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm', 'Intel Cascadelake Server Family': 'Cascadelake-Server,-hle,-rtm,+arch-capabilities', 'Secure Intel Cascadelake Server Family': 'Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities', 'AMD Opteron G4': 
'Opteron_G4', 'AMD Opteron G5': 'Opteron_G5', 'AMD EPYC': 'EPYC', 'Secure AMD EPYC': 'EPYC,+ibpb,+virt-ssbd', 'IBM POWER8': 'POWER8', 'IBM POWER9': 'POWER9', 'IBM z114, z196': 'z196-base', 'IBM zBC12, zEC12': 'zEC12-base', 'IBM z13s, z13': 'z13-base', 'IBM z14': 'z14-base'}"]

Making sure it matches the API:

https://ovirt-engine.home.io:6900/ovirt-engine/api/options/ServerCPUList?version=4.4

{
  "values": {
    "system_option_value": [
      {
"value": "1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; 2:Secure Intel Nehalem Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64; 3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64; 4:Secure Intel Westmere Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64; 5:Intel SandyBridge Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64; 6:Secure Intel SandyBridge Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64; 7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64; 8:Secure Intel IvyBridge Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64; 9:Intel Haswell Family:vmx,nx,model_Haswell-noTSX:Haswell-noTSX:x86_64; 10:Secure Intel Haswell Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell-noTSX:Haswell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64; 11:Intel Broadwell Family:vmx,nx,model_Broadwell-noTSX:Broadwell-noTSX:x86_64; 12:Secure Intel Broadwell Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell-noTSX:Broadwell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64; 13:Intel Skylake Client Family:vmx,nx,model_Skylake-Client:Skylake-Client,-hle,-rtm:x86_64; 14:Secure Intel Skylake Client Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64; 15:Intel Skylake Server Family:vmx,nx,model_Skylake-Server:Skylake-Server,-hle,-rtm:x86_64; 16:Secure Intel Skylake Server Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64; 17:Intel Cascadelake Server Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64; 18:Secure Intel Cascadelake Server Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64; 1:AMD Opteron 
G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64; 2:AMD Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64; 3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64; 4:Secure AMD EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64; 1:IBM POWER8:powernv,model_POWER8:POWER8:ppc64; 2:IBM POWER9:powernv,model_POWER9:POWER9:ppc64; 1:IBM z114, z196:sie,model_z196-base:z196-base:s390x; 2:IBM zBC12, zEC12:sie,model_zEC12-base:zEC12-base:s390x; 3:IBM z13s, z13:sie,model_z13-base:z13-base:s390x; 4:IBM z14:sie,model_z14-base:z14-base:s390x;",
        "version": "4.4"
      }
    ]
  },
  "name": "ServerCPUList",
  "href": "/ovirt-engine/api/options/ServerCPUList",
  "id": "ServerCPUList"
}
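(For anyone scripting against that options endpoint: here is a minimal stdlib sketch of the same call. The host, port, and the authenticated `opener` are assumptions from my lab setup; you would need to supply your own credentials, e.g. via a Basic-auth handler.)

```python
import json
import urllib.request

# My lab engine; substitute your own host and port.
ENGINE = "https://ovirt-engine.home.io:6900"

def server_cpu_list_url(version="4.4"):
    """URL of the ServerCPUList option for a given compatibility version."""
    return f"{ENGINE}/ovirt-engine/api/options/ServerCPUList?version={version}"

def fetch_server_cpu_value(opener):
    """Fetch the raw ServerCPUList value string.

    `opener` must already carry auth (for example a urllib opener built
    with an HTTPBasicAuthHandler for the admin@internal user).
    """
    req = urllib.request.Request(server_cpu_list_url(),
                                 headers={"Accept": "application/json"})
    with opener.open(req) as resp:
        doc = json.load(resp)
    # Same path the playbook uses: values.system_option_value[0].value
    return doc["values"]["system_option_value"][0]["value"]
```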


So our issue must be with 'cluster_cpu.type'.

'cluster_cpu' is set in 'Parse Cluster details'.
It is built from 'cluster_facts.ovirt_clusters'.

cluster_facts.ovirt_clusters is set in 'Fetch Cluster name'.

cluster_facts is set in 'Fetch Cluster facts'.

I have not gotten into the 'ovirt_cluster_info' module yet.
https://docs.ansible.com/ansible/latest/modules/ovirt_cluster_module.html

I went ahead and dug around in the docs for the other API calls. I found the one that appears to match the oVirt Ansible module call.
http://ovirt.github.io/ovirt-engine-api-model/4.4/#services/cluster

I can get the ID of the default cluster, then pull the details.

https://ovirt-engine.home.io:6900/ovirt-engine/api/clusters/b198fd0c-f8df-11ea-b99b-00163e5d8bc7

"cpu": {
    "architecture": "undefined",
    "type": ""
  }


So, the 'type' and 'architecture' are still undefined for the cluster, even though the first node has been added at this point. I thought I would try a PUT to the API to change the settings, just to see if I could make any progress.

PUT https://ovirt-engine.home.io:6900/ovirt-engine/api/clusters/b198fd0c-f8df-11ea-b99b-00163e5d8bc7
<cluster>
  <cpu>
    <type>Intel Nehalem Family</type>
    <architecture>x86_64</architecture>
  </cpu>
</cluster>
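(Side note for anyone scripting this: rather than hand-typing the XML, the PUT body above can be built with stdlib ElementTree. This is just my sketch, not anything from the playbook; sending it would need any HTTP client plus the usual API auth, which I'm omitting.)

```python
import xml.etree.ElementTree as ET

# Build the <cluster><cpu>...</cpu></cluster> PUT body for the clusters API.
cluster = ET.Element('cluster')
cpu = ET.SubElement(cluster, 'cpu')
ET.SubElement(cpu, 'type').text = 'Intel Nehalem Family'
ET.SubElement(cpu, 'architecture').text = 'x86_64'

body = ET.tostring(cluster, encoding='unicode')
print(body)
# <cluster><cpu><type>Intel Nehalem Family</type><architecture>x86_64</architecture></cpu></cluster>
```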

Result:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<fault>
    <detail>[Cannot change Cluster CPU to higher CPU type when there are active Hosts with lower CPU type. -Please move Host ovirt-node01.home.io with lower CPU to maintenance first.]</detail>
    <reason>Operation Failed</reason>
</fault>

Let's find the first node:

https://ovirt-engine.home.io:6900/ovirt-engine/api/hosts

      "address": "ovirt-node01.home.io",
      "auto_numa_status": "unknown",
      "certificate": {
        "organization": "home.io",
        "subject": "O=home.io,CN=ovirt-node01.home.io"
      },
      "cpu": {
        "name": "Intel(R) Xeon(R) CPU            5130  @ 2.00GHz",
        }


Summary so far:
We have the HostedEngineLocal instance online.
The first node is joined to the cluster.
However, the playbook never sets the cpu architecture or cpu type properties on the cluster object. This is the cause of the playbook failure when we attempt to access the undefined property. I might dig a little more to understand why it's not set on the cluster, but that would get into the oVirt code and probably isn't worth the effort.

I am able to set the cpu architecture via the API:

PUT https://ovirt-engine.home.io:6900/ovirt-engine/api/clusters/b198fd0c-f8df-11ea-b99b-00163e5d8bc7
<cluster>
  <cpu>
    <architecture>x86_64</architecture>
  </cpu>
</cluster>


"cpu": {
        "architecture": "x86_64",
        "type": ""
      },

[ INFO ] ["cluster_cpu is {'architecture': 'x86_64', 'type': ''}"]

But, as expected, we still fail on the 'type' lookup in
    cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"

[ INFO ] TASK [ovirt.hosted_engine_setup : Convert CPU model name]

[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute ''\n\nThe error appears to be in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml': line 115, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - \"cluster_cpu is {{ cluster_cpu }}\"\n - name: Convert CPU model name\n ^ here\n"}
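In plain Python terms, that Ansible error is just a dictionary lookup with an empty-string key (trimmed sample dict here, variable names mirroring the playbook's):

```python
server_cpu_dict = {'Intel Nehalem Family': 'Nehalem'}  # trimmed sample
cluster_cpu = {'architecture': 'x86_64', 'type': ''}   # what the engine API returned

# server_cpu_dict[cluster_cpu['type']] becomes server_cpu_dict[''], and there
# is no '' key -- Ansible surfaces this as "'dict object' has no attribute ''"
try:
    cluster_cpu_model = server_cpu_dict[cluster_cpu['type']]
except KeyError as err:
    cluster_cpu_model = None
    print(f"lookup failed for key {err!r}")
```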


After going through all of this, which was still good for learning, my "fix" was pretty simple. I just needed a value that actually exists in that dictionary. I thought about trying to change the CPU value on the host so I could get a better API return, but that might cause more issues. It was easier to just set the fact so we could move past that point in the playbook.
So I used the lowest value from the list.

~~~
    115   - name: HACK - Set cluster_cpu.type
    116     set_fact:
    117       cluster_cpu: {'type': 'Intel Nehalem Family', 'architecture': 'x86_64'}
    118   - name: HACK - Trace the fact we set
    119     debug:
    120       msg:
    121       - "cluster cpu is {{ cluster_cpu }}"
~~~


[ INFO ] ["cluster_cpu is {'architecture': 'x86_64', 'type': ''}"]
[ INFO ] TASK [ovirt.hosted_engine_setup : HACK - Set cluster_cpu.type]
[ INFO ] ["cluster cpu is {'type': 'Intel Nehalem Family', 'architecture': 'x86_64'}"]

Now the hosted engine deployment progressed further, and the image file was created on the NFS storage.
[ INFO ] TASK [ovirt.hosted_engine_setup : Add HE disks]

And now it fails here.
[ FAILED  ]     Add HE disks
"msg": "Timeout exceed while waiting on result state of the entity."



The disk copies over fine, along with the metadata, so it's not that. But when I dig into the jobs API I find a large queue of failed jobs:

"description": "Adding unmanaged VMs running on Host ovirt-node01.home.io to Cluster Default",
"status": "failed",

It looks like it loops through attempting this operation until it times out.

Something is still stopping that HostedEngineLocal instance from transferring over to the cluster. I'm tempted to look into ovirt_disk_28.py and ovirt.py to see if I can track down why it won't transfer, but I'm out of time and need to get real work done instead of playing.

True end result: as much as this was entertaining and educational, it ate up way too much time from the OKD work I needed to do. I was hoping to use the Installer Provisioned Infrastructure option with the oVirt/OKD integration. Guess it's back to UPI on a different hypervisor until I can get some newer hardware.






On 9/17/2020 12:54 AM, Strahil Nikolov wrote:
It seems that this one fails :

- name: Parse server CPU list
   set_fact:
     server_cpu_dict: "{{ server_cpu_dict |
       combine({item.split(':')[1]: item.split(':')[3]}) }}"

In cases like that I usually define a new variable.

Can you put another task before that like:
- name: Debug server_cpu_dict
   debug:
     var: server_cpu_dict


Best Regards,
Strahil Nikolov


On Thursday, September 17, 2020 at 00:30:57 GMT+3, Michael Blanton
<mblan...@vnet.net> wrote:





In my previous reply:

Ansible task reports them as Xeon 5130.
According to Intel Ark these fall in the Woodcrest family, which is
older than Nehalem.

Xeon 5130 "Woodcrest"
Do you need something more specific or different?

I also found a reply from you on an older thread and added it:

~~~
     100  - name: Debug why parsing fails
     101    debug:
     102      msg:
     103      - "Loop is done over
{{server_cpu_list.json['values']['system_option_value'][0]['value'].split(';')|list|
         difference(['']) }}"
     104      - "Actual value of server_cpu_dict before the set_fact is
{{server_cpu_dict }}"
     105  - name: Parse server CPU list
     106    set_fact:
     107      server_cpu_dict: "{{ server_cpu_dict |
combine({item.split(':')[1]: item.split(':')[3]}) }}"
     108    with_items: >-
     109      {{
server_cpu_list.json['values']['system_option_value'][0]['value'].split(';
')|list|difference(['']) }}
     110  - debug: var=server_cpu_dict
     111  - name: Convert CPU model name
     112    set_fact:
     113      cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"
     114  - debug: var=cluster_cpu_model
~~~

   [ INFO ] ["Loop is done over ['1:Intel Nehalem
Family:vmx,nx,model_Nehalem:Nehalem:x86_64', ' 2:Secure Intel Nehalem
Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64',
' 3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64', '
4:Secure Intel Westmere
Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64',
' 5:Intel SandyBridge
Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64', ' 6:Secure Intel
SandyBridge
Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64',
' 7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64', '
8:Secure Intel IvyBridge
Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64',
' 9:Intel Haswell
Family:vmx,nx,model_Haswell-noTSX:Haswell-noTSX:x86_64', ' 10:Secure
Intel Haswell
Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell-noTSX:Haswell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64',
' 11:Intel Broadwell
Family:vmx,nx,model_Broadwell-noTSX:Broadwell-noTSX:x86_64', ' 12:Secure
Intel Broadwell
Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell-noTSX:Broadwell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64',
' 13:Intel Skylake Client
Family:vmx,nx,model_Skylake-Client:Skylake-Client,-hle,-rtm:x86_64', '
14:Secure Intel Skylake Client
Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64',
' 15:Intel Skylake Server
Family:vmx,nx,model_Skylake-Server:Skylake-Server,-hle,-rtm:x86_64', '
16:Secure Intel Skylake Server
Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear,-hle,-rtm:x86_64',
' 17:Intel Cascadelake Server
Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64',
' 18:Secure Intel Cascadelake Server
Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64',
' 1:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64', ' 2:AMD
Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64', ' 3:AMD
EPYC:svm,nx,model_EPYC:EPYC:x86_64', ' 4:Secure AMD
EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64', ' 1:IBM
POWER8:powernv,model_POWER8:POWER8:ppc64', ' 2:IBM
POWER9:powernv,model_POWER9:POWER9:ppc64', ' 1:IBM z114,
z196:sie,model_z196-base:z196-base:s390x', ' 2:IBM zBC12,
zEC12:sie,model_zEC12-base:zEC12-base:s390x', ' 3:IBM z13s,
z13:sie,model_z13-base:z13-base:s390x', ' 4:IBM
z14:sie,model_z14-base:z14-base:s390x']", 'Actual value of
server_cpu_dict before the set_fact is {}']

[ INFO ] TASK [ovirt.hosted_engine_setup : Parse server CPU list]

[ INFO ] ok: [localhost]

[ INFO ] TASK [ovirt.hosted_engine_setup : Convert CPU model name]

[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
option with an undefined variable. The error was: 'dict object' has no
attribute ''\n\nThe error appears to be in
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml':
line 110, column 15, but may\nbe elsewhere in the file depending on the
exact syntax problem.\n\nThe offending line appears to be:\n\n - debug:
var=server_cpu_dict\n ^ here\n\nThere appears to be both 'k=v' shorthand
syntax and YAML in this task. Only one syntax may be used.\n"}




On 9/16/2020 4:14 PM, Strahil Nikolov wrote:
You didn't mention your CPU type.

Best Regards,
Strahil Nikolov






On Wednesday, September 16, 2020 at 20:44:23 GMT+3, Michael Blanton
<mblan...@vnet.net> wrote:





Wondering if there are any suggestions here before I wipe these nodes
and go back to another Hypervisor.




On 9/14/2020 12:59 PM, Michael Blanton wrote:
Thanks for the quick response.

Ansible task reports them as Xeon 5130.
According to Intel Ark these fall in the Woodcrest family, which is
older than Nehalem.

Obviously the CPUs support virtualization.
I also confirmed the required extensions from the oVirt documents.

# grep -E 'svm|vmx' /proc/cpuinfo | grep n

Question for my lab:
So is this a situation where "Woodcrest" is simply not in the dictionary?
Is there a way to manually add that or "force" it, just to get the
engine to deploy? That way I can kick the tires on oVirt while I
consider an upgrade to my lab systems. Knowing ahead of time that it is
a "hack" and unsupported.

Question for product:
If this is an unsupported CPU, shouldn't the installer/Hosted Engine
Deployment flag that at the beginning of the process, not 45 minutes
later when trying to move the already created VM to shared storage?

Thanks again



On 9/14/2020 12:45 PM, Edward Berger wrote:
What is the CPU?  I'm asking because you said it was old servers, and
at some point oVirt started filtering out old CPU types which were no
longer supported under windows.   There was also the case where if a
certain bios option wasn't enabled (AES?) a westmere(supported)
reported as an older model(unsupported).


On Mon, Sep 14, 2020 at 12:20 PM <mblan...@vnet.net
<mailto:mblan...@vnet.net>> wrote:

       I am attempting a new oVirt install. I have two nodes installed
       (with oVirt Node 4.4). I have NFS shared storage for the hosted
engine.
       Both nodes are Dell quad core Xeon CPUs with 32GB of RAM. Both have
       been hypervisors before, XCP-NG and Proxmox. However I'm very
       interested to learn oVirt now.

       The hosted engine deployment (through cockpit) fails during the
       "Finish" stage.
       I do see the initial files created on the NFS storage.

       [ INFO ] TASK [ovirt.hosted_engine_setup : Convert CPU model name]
       [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
       an option with an undefined variable. The error was: 'dict object'
       has no attribute ''\n\nThe error appears to be in
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml':

       line 105, column 16, but may\nbe elsewhere in the file depending on
       the exact syntax problem.\n\nThe offending line appears to be:\n\n#
       - debug: var=server_cpu_dict\n ^ here\n\nThere appears to be both
       'k=v' shorthand syntax and YAML in this task. Only one syntax may be
       used.\n"}

       2020-09-13 17:39:56,507+0000 ERROR ansible failed {
            "ansible_host": "localhost",
            "ansible_playbook":
       "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
            "ansible_result": {
                "_ansible_no_log": false,
                "msg": "The task includes an option with an undefined
       variable. The error was: 'dict object' has no attribute ''
       \n\nThe error appears to be in
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_targ

       et_hosted_engine_vm.yml': line 105, column 16, but may\nbe elsewhere
       in the file depending on the exact syntax problem.\
       n\nThe offending line appears to be:\n\n#  - debug:
       var=server_cpu_dict\n               ^ here\n\nThere appears to be bo
       th 'k=v' shorthand syntax and YAML in this task. Only one syntax may
       be used.\n"
            },
            "ansible_task": "Convert CPU model name",
            "ansible_type": "task",
            "status": "FAILED",
            "task_duration": 1
       }

       I can see the host engine is created and running locally on the node.
       I can event SSH into the HostedEngineLocal instance.

       [root@ovirt-node01]# virsh --readonly list
         Id   Name                State
       -----------------------------------
         1    HostedEngineLocal   running


       Looking at the "Convert CPU model name" task:
https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml

<https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml>


       set_fact:
              cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"

       server_cpu_dict is good, I can find that in the logs, cluster_cpu is
       undefined.
       But this is normal correct? The Cluster CPU type is "undefined"
       until the first host is added to the cluster.
       The error makes it seems that server_cpu_dict and not
       cluster_cpu.type is the problem.
       I'm not sure this is really the problem, but that is the only
undefined variable I can find.

       Any advice or recommendation is appreciated
       -Thanks in advance
       _______________________________________________
       Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
       To unsubscribe send an email to users-le...@ovirt.org
       <mailto:users-le...@ovirt.org>
       Privacy Statement: https://www.ovirt.org/privacy-policy.html
       <https://www.ovirt.org/privacy-policy.html>
       oVirt Code of Conduct:
       https://www.ovirt.org/community/about/community-guidelines/
       <https://www.ovirt.org/community/about/community-guidelines/>
       List Archives:


