[ovirt-users] Re: VM causes CPU blocks and forces reboot of host

2023-05-15 Thread Jeff Bailey
Sounds exactly like some trouble I was having.  I downgraded the kernel 
to 4.18.0-448 and everything is fine.  There have been a couple of 
kernel releases since I had problems but I haven't had a chance to try 
them yet.  I believe it was in 4.18.0-485 that I noticed it but that's 
just from memory.
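
A hedged sketch of that downgrade on CentOS Stream 8 / EL8 (the exact 4.18.0-448 package NVR available in your repos is an assumption, so check with --showduplicates first):

# Sketch only: install an older kernel alongside the current one and make it the default.
dnf --showduplicates list kernel          # confirm which 4.18.0-448.* build is available
dnf install kernel-4.18.0-448.el8         # adjust to the NVR shown above
grubby --set-default /boot/vmlinuz-4.18.0-448.el8.x86_64
reboot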



On 5/11/2023 2:26 PM, dominik.dra...@blackrack.pl wrote:

Hello,
I have recently migrated our customer's cluster to newer hardware (CentOS 8 Stream, 
4 hypervisor nodes, 3 hosts serving GlusterFS with 5x 6 TB SSDs as JBOD, replica 3). 
About a month after the switch we started encountering frequent VM locks that require 
a host reboot to clear. Affected VMs cannot be powered down from the oVirt UI, and 
even when oVirt does manage to power them down, they cannot be booted again because 
the OS disk is reported as in use. Once I reboot the host, the VMs can be started 
again and everything works fine.


[ovirt-users] Re: Failed to read or parse '/etc/pki/ovirt-engine/keys/engine.p12'

2023-05-15 Thread - tineidae via Users
I remember having this issue when trying to replace my dead CentOS 8 hosted engine 
with a new physical host running CentOS 9 Stream. The issue was that RC2 is not 
supported in el9, so I used openssl to convert all the p12 files in 
/etc/pki/ovirt-engine/keys to use AES instead.
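
A hedged sketch of that conversion (the file name under /etc/pki/ovirt-engine/keys, the AES-256-CBC choice and the -legacy flag are assumptions; back up the directory and have the keystore password at hand):

# Unpack the old RC2-encrypted PKCS#12 to PEM, then repackage it with AES.
cp /etc/pki/ovirt-engine/keys/engine.p12 /etc/pki/ovirt-engine/keys/engine.p12.bak
openssl pkcs12 -in /etc/pki/ovirt-engine/keys/engine.p12.bak -nodes -out /tmp/engine.pem
# (on OpenSSL 3.x / el9, reading the RC2-encrypted file may additionally need -legacy)
openssl pkcs12 -export -in /tmp/engine.pem \
    -keypbe AES-256-CBC -certpbe AES-256-CBC \
    -out /etc/pki/ovirt-engine/keys/engine.p12
rm -f /tmp/engine.pem   # the PEM dump holds the unencrypted key
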
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDCV2RNFODXO5HKD4RR6422FB7V7P3JF/


[ovirt-users] engine setup fails: error: The system may not be provisioned according to the playbook results

2023-05-15 Thread neeldey427
I'm trying to set up the engine, but I am getting the same error.


[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove temporary entry in 
/etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool 
localvm3a2r5z0y]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool 
localvm3a2r5z0y]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool 
9ef860a6-ee88-4aa6-94ac-a429a90ebec8]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool 
9ef860a6-ee88-4aa6-94ac-a429a90ebec8]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a 
failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system 
may not be provisioned according to the playbook results: please check the logs 
for the issue, fix accordingly or re-deploy from scratch.\n"}


Please let me know if you need more information in this regard or contents from 
any of the log files.

Any & all suggestions on how to fix/troubleshoot this are much appreciated.
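
A hedged pointer, not from the post: with this generic "may not be provisioned" message the concrete failure is usually a few tasks earlier in the hosted-engine setup logs rather than in the console summary, e.g.:

# Assumed default log location for ovirt-hosted-engine-setup:
ls -lt /var/log/ovirt-hosted-engine-setup/
grep -iE 'fatal|failed!' /var/log/ovirt-hosted-engine-setup/*.log | tail -n 40
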
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RHQMECJWDPZKYLRQUI34BJ545LQOYVT5/


[ovirt-users] VM causes CPU blocks and forces reboot of host

2023-05-15 Thread dominik . drazyk
Hello,
I have recently migrated our customer's cluster to newer hardware (CentOS 8 Stream, 
4 hypervisor nodes, 3 hosts serving GlusterFS with 5x 6 TB SSDs as JBOD, replica 3). 
About a month after the switch we started encountering frequent VM locks that require 
a host reboot to clear. Affected VMs cannot be powered down from the oVirt UI, and 
even when oVirt does manage to power them down, they cannot be booted again because 
the OS disk is reported as in use. Once I reboot the host, the VMs can be started 
again and everything works fine.

In the vdsm logs I see the following error:
 2023-05-11 19:33:12,339+0200 ERROR (qgapoller/1) [virt.periodic.Operation] 
> 
operation failed (periodic:187)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 185, in 
__call__
self._func()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 
476, in _poller
vm_id, self._qga_call_get_vcpus(vm_obj))
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 
797, in _qga_call_get_vcpus
if 'online' in vcpus:
TypeError: argument of type 'NoneType' is not iterable
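
The TypeError itself looks like a secondary symptom: if the hung guest's qemu-guest-agent returns nothing, the vCPU reply is None and vdsm's membership test trips over it. A minimal illustrative guard in Python (an assumption about the reply shape, not the actual vdsm code):

# Illustrative sketch only, not the vdsm patch: guard against a None
# guest-agent reply before the 'online' membership test.
def online_vcpus(vcpus):
    if vcpus is not None and 'online' in vcpus:
        return vcpus['online']
    return None

print(online_vcpus(None))               # prints None instead of raising TypeError
print(online_vcpus({'online': '0-3'}))  # prints 0-3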

/var/log/messages reports:
May 11 19:35:15 kernel: task:CPU 7/KVM   state:D stack:0 pid: 7065 
ppid: 1 flags: 0x8182
May 11 19:35:15 kernel: Call Trace:
May 11 19:35:15 kernel: __schedule+0x2d1/0x870
May 11 19:35:15 kernel: schedule+0x55/0xf0
May 11 19:35:15 kernel: schedule_preempt_disabled+0xa/0x10
May 11 19:35:15 kernel: rwsem_down_read_slowpath+0x26e/0x3f0
May 11 19:35:15 kernel: down_read+0x95/0xa0
May 11 19:35:15 kernel: get_user_pages_unlocked+0x66/0x2a0
May 11 19:35:15 kernel: hva_to_pfn+0xf5/0x430 [kvm]
May 11 19:35:15 kernel: kvm_faultin_pfn+0x95/0x2e0 [kvm]
May 11 19:35:15 kernel: ? select_task_rq_fair+0x355/0x990
May 11 19:35:15 kernel: ? sched_clock+0x5/0x10
May 11 19:35:15 kernel: ? sched_clock_cpu+0xc/0xb0
May 11 19:35:15 kernel: direct_page_fault+0x3b4/0x860 [kvm]
May 11 19:35:15 kernel: kvm_mmu_page_fault+0x114/0x680 [kvm]
May 11 19:35:15 kernel: ? vmx_vmexit+0x9f/0x70d [kvm_intel]
May 11 19:35:15 kernel: ? vmx_vmexit+0xae/0x70d [kvm_intel]
May 11 19:35:15 kernel: ? gfn_to_pfn_cache_invalidate_start+0x190/0x190 [kvm]
May 11 19:35:15 kernel: vmx_handle_exit+0x177/0x770 [kvm_intel]
May 11 19:35:15 kernel: ? gfn_to_pfn_cache_invalidate_start+0x190/0x190 [kvm]
May 11 19:35:15 kernel: vcpu_enter_guest+0xafd/0x18e0 [kvm]
May 11 19:35:15 kernel: ? hrtimer_try_to_cancel+0x7b/0x100
May 11 19:35:15 kernel: kvm_arch_vcpu_ioctl_run+0x112/0x600 [kvm]
May 11 19:35:15 kernel: kvm_vcpu_ioctl+0x2c9/0x640 [kvm]
May 11 19:35:15 kernel: ? pollwake+0x74/0xa0
May 11 19:35:15 kernel: ? wake_up_q+0x70/0x70
May 11 19:35:15 kernel: ? __wake_up_common+0x7a/0x190
May 11 19:35:15 kernel: do_vfs_ioctl+0xa4/0x690
May 11 19:35:15 kernel: ksys_ioctl+0x64/0xa0
May 11 19:35:15 kernel: __x64_sys_ioctl+0x16/0x20
May 11 19:35:15 kernel: do_syscall_64+0x5b/0x1b0
May 11 19:35:15 kernel: entry_SYSCALL_64_after_hwframe+0x61/0xc6
May 11 19:35:15 kernel: RIP: 0033:0x7faf1a1387cb
May 11 19:35:15 kernel: Code: Unable to access opcode bytes at RIP 
0x7faf1a1387a1.
May 11 19:35:15 kernel: RSP: 002b:7fa6f5ffa6e8 EFLAGS: 0246 ORIG_RAX: 
0010
May 11 19:35:15 kernel: RAX: ffda RBX: 55be52e7bcf0 RCX: 
7faf1a1387cb
May 11 19:35:15 kernel: RDX:  RSI: ae80 RDI: 
0027
May 11 19:35:15 kernel: RBP:  R08: 55be5158c6a8 R09: 
0007d9e95a00
May 11 19:35:15 kernel: R10: 0002 R11: 0246 R12: 

May 11 19:35:15 kernel: R13: 55be515bcfc0 R14: 7fffec958800 R15: 
7faf1d6c6000
May 11 19:35:15 kernel: INFO: task worker:714626 blocked for more than 120 
seconds.
May 11 19:35:15 kernel:  Not tainted 4.18.0-489.el8.x86_64 #1
May 11 19:35:15 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" 
disables this message.

May 11 19:35:15 kernel: task:worker  state:D stack:0 pid:714626 
ppid: 1 flags:0x0180
May 11 19:35:15 kernel: Call Trace:
May 11 19:35:15 kernel: __schedule+0x2d1/0x870
May 11 19:35:15 kernel: schedule+0x55/0xf0
May 11 19:35:15 kernel: schedule_preempt_disabled+0xa/0x10
May 11 19:35:15 kernel: rwsem_down_read_slowpath+0x26e/0x3f0
May 11 19:35:15 kernel: down_read+0x95/0xa0
May 11 19:35:15 kernel: do_madvise.part.30+0x2c3/0xa40
May 11 19:35:15 kernel: ? syscall_trace_enter+0x1ff/0x2d0
May 11 19:35:15 kernel: ? __x64_sys_madvise+0x26/0x30
May 11 19:35:15 kernel: __x64_sys_madvise+0x26/0x30
May 11 19:35:15 kernel: do_syscall_64+0x5b/0x1b0
May 11 19:35:15 kernel: entry_SYSCALL_64_after_hwframe+0x61/0xc6
May 11 19:35:15 kernel: RIP: 0033:0x7faf1a138a4b
May 11 19:35:15 kernel: Code: Unable to access opcode bytes at RIP 
0x7faf1a138a21.
May 11 19:35:15 kernel: RSP: 002b:7faf151ea7f8 EFLAGS: 0206 ORIG_RAX: 
001c
May 11 19:35:15 kernel: RAX: ffda RBX: 7faf149eb000 RCX: 
7faf1a138a4b
May 

[ovirt-users] Migration failed after upgrade engine from 4.3 to 4.4

2023-05-15 Thread Emmanuel Ferrandi

Hi !

When I try to migrate a running VM (regardless of OS) from one 
hypervisor to another, the VM is immediately shut down with this error 
message:


   Migration failed: Admin shut down from the engine (VM: VM, Source:
   HP11).

The oVirt engine has been upgraded from version 4.3 to version 4.4.
Some nodes are in version 4.3 and others in version 4.4.

Here are the oVirt versions for selected hypervisors:

 * HP11 : 4.4
 * HP5 : 4.4
 * HP6 : 4.3

Here are the migration attempts I tried with a running VM:

 *  From HP > to HP
 * HP6 > HP5 : OK
 * HP6 > HP11 : OK
 * HP5 > HP11 : OK
 * HP5 > HP6 : OK
 * HP11 > HP5 : *NOK*
 * HP11 > HP6 : OK

As shown above, migrating a VM between the two oVirt versions is not a problem.
Migration between the two hosts running 4.4 works only in one direction 
(HP5 to HP11) and fails in the other (HP11 to HP5).


I have already tried reinstalling both HPs with version 4.4, but without success.

Here are the logs on the HP5 concerning the VM:

   /var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
   (jsonrpc/3) [api.virt] START destroy(gracefulAttempts=1)
   from=:::172.20.3.250,37534, flow_id=43364065,
   vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:48)
   /var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
   (jsonrpc/3) [api] FINISH destroy error=Virtual machine does not
   exist: {'vmId': 'd14f75cd-1cb1-440b-9780-6b6ee78149ac'} (api:129)
   /var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
   (jsonrpc/3) [api.virt] FINISH destroy return={'status': {'code': 1,
   'message': "Virtual machine does not exist: {'vmId':
   'd14f75cd-1cb1-440b-9780-6b6ee78149ac'}"}}
   from=:::172.20.3.250,37534, flow_id=43364065,
   vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:54)

   /var/log/libvirt/qemu/VM.log:2023-03-24 14:56:51.474+:
   initiating migration
   /var/log/libvirt/qemu/VM.log:2023-03-24 14:56:54.342+:
   shutting down, reason=migrated
   /var/log/libvirt/qemu/VM.log:2023-03-24T14:56:54.870528Z qemu-kvm:
   terminating on signal 15 from pid 4379 ()

Here are the logs on the engine concerning the VM:

   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,333+02 INFO
   [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default
   task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
   MigrateVDSCommand(
   MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
   vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
   dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
   dstHost='HP5:54321', migrationMethod='ONLINE',
   tunnelMigration='false', migrationDowntime='0', autoConverge='true',
   migrateCompressed='false', migrateEncrypted='null',
   consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
   maxIncomingMigrations='2', maxOutgoingMigrations='2',
   convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
   stalling=[{limit=1, action={name=setDowntime, params=[150]}},
   {limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
   action={name=setDowntime, params=[300]}}, {limit=4,
   action={name=setDowntime, params=[400]}}, {limit=6,
   action={name=setDowntime, params=[500]}}, {limit=-1,
   action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
   6a3507d0
   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,334+02 INFO
   [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
   (default task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
   MigrateBrokerVDSCommand(HostName = HP11,
   MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
   vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
   dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
   dstHost='HP5:54321', migrationMethod='ONLINE',
   tunnelMigration='false', migrationDowntime='0', autoConverge='true',
   migrateCompressed='false', migrateEncrypted='null',
   consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
   maxIncomingMigrations='2', maxOutgoingMigrations='2',
   convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
   stalling=[{limit=1, action={name=setDowntime, params=[150]}},
   {limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
   action={name=setDowntime, params=[300]}}, {limit=4,
   action={name=setDowntime, params=[400]}}, {limit=6,
   action={name=setDowntime, params=[500]}}, {limit=-1,
   action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
   f254f72
   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,246+02 INFO
   [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
   (ForkJoinPool-1-worker-9) [3f0e966d] VM
   'd14f75cd-1cb1-440b-9780-6b6ee78149ac' was reported as Down on VDS
   '6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,296+02 INFO
   [org.ovirt.engine.core.bll.SaveVmExternalDataCommand]
   (ForkJoinPool-1-worker-9) [43364065] Running 

[ovirt-users] Re: barely started - cannot import name 'Callable' from 'collections'

2023-05-15 Thread itprof13
# vi /usr/share/ovirt-hosted-engine-setup/he_ansible/callback_plugins/2_ovirt_logger.py

In that file, replace "collections" with "collections.abc".
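
Concretely, the failing import uses the pre-Python-3.10 location of the ABCs; the fix amounts to the following (illustrative, the surrounding code in 2_ovirt_logger.py may differ):

# Before (fails on Python 3.10+, where the ABCs are no longer importable
# from the collections package itself):
#   from collections import Callable
# After:
from collections.abc import Callable

print(Callable)   # <class 'collections.abc.Callable'>
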
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/STSJHA3KBSIAKNX67CEPFXEVVHPUDE5V/


[ovirt-users] Remove role AdminReadOnly from Everyone group that has made my admin readonly

2023-05-15 Thread robert . r . bristow
We accidentally added the AdminReadOnly role to the Everyone group, and it has now 
made our admin account read-only.
Does someone know how we can remove it to get the system back in the oVirt engine?
I found the following, but being new to this, and with the links not working, it 
does not help much.

Regards
Rob
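
Not from the post, only a heavily hedged sketch: when every admin account is affected, the usual last resort is to remove the offending permission row directly in the engine database. The table and column names below are assumptions from memory, so verify them against your engine version and take a database/engine backup first.

# On the engine host, as root (assumes the default local database named 'engine'):
su - postgres -c "psql engine"
-- inside psql: list the AdminReadOnly permission rows
SELECT p.id, p.ad_element_id, p.object_type_id
  FROM permissions p JOIN roles r ON r.id = p.role_id
  WHERE r.name = 'AdminReadOnly';
-- then delete the row that ties the role to the Everyone group:
-- DELETE FROM permissions WHERE id = '<id from the query above>';
-- and afterwards restart the engine: systemctl restart ovirt-engine
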
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GHFM2J2I7SOHGRVB2N3A7WRP5VU7JJ2D/


[ovirt-users] [ansible]attach vdisk to vm

2023-05-15 Thread Pietro Pesce
Hello everyone,

I created a playbook to create and attach a vdisk (from a direct LUN) to a VM; the 
first block works. Now I want to attach the created vdisk to a second VM. How can I 
do that?

---

# Add fibre channel disk
- name: Create disk
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: "{{ item.0 }}"
    host: "{{host}}"
    shareable: True
    interface: virtio_scsi
    vm_name: "{{hostname}}"
    scsi_passthrough: disabled
    logical_unit:
      id: "{{ item.1 }}"
      storage_type: fcp
  loop: "{{ disk_name | zip(lun) | list }}"

## Add disk to second node
#- name: Create disk
#  ovirt.ovirt.ovirt_disk:
#    auth: "{{ ovirt_auth }}"
#    vm_name: "{{hostname2}}"
#    name: "{{ item.0 }}"
#    host: "{{host}}"
#    interface: virtio_scsi
#    logical_unit:
#      id: "{{ item.1 }}"
#      storage_type: fcp
#  loop: "{{ disk_name | zip(lun) | list }}"
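
A hedged sketch of one way to do it with the same module (not from the original playbook): since the disks already exist and are shareable, they can be attached to the second VM by name with state: attached. The variable names hostname2 and disk_name follow the commented-out block above.

# Attach the already-created shareable disks to the second VM (sketch).
- name: Attach existing disks to second VM
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: "{{ item }}"
    vm_name: "{{ hostname2 }}"
    interface: virtio_scsi
    state: attached
  loop: "{{ disk_name }}"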


thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FCNAKTTSNKYDJIQ5ZE44XOTAHGF3YOG4/