[ovirt-users] Re: How to pass parameters between VDSM Hooks domxml in single run

2019-10-03 Thread Vrgotic, Marko
Hi Michal,

Thank you. Would you be so kind to provide me with additional clarification?

> you can’t just add a random tag into libvirt xml in a random place, it will 
> be dropped by libvirt.
I understand, thank you. About the persistence of the added tag: it was not 
used/written during the first migration, but it was present in the domxml on the second migration. 

> you can add it to metadata though. we use that for ovirt-specific information
Can you please provide some more HowTo/HowNotTo information?
Can we manipulate the tag in the metadata section in each iteration?
I assume the VM metadata is shared/communicated between Hosts, or read and provided to 
the Hosts by oVirt-Engine?
In short, we are trying to achieve:
- start migration
  - ex: a 10_create_tag hook inserts an "is_migration" tag into the XML metadata section (see the sketch below) <= maybe we can use the before_vm_migrate_source hook
- migration is finished and the after_vm_destroy hook's turn comes:
  - ex: 20_nsupdate reads the metadata and:
    - if the "is_migration" tag exists, do not run the dns update, but remove the tag
    - if the "is_migration" tag does not exist, run the dns update and remove the tag
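To make the intention concrete, this is roughly what I have in mind; a minimal, untested sketch only. The hook file names and the way the tag is attached to the metadata section are my assumptions, not a verified recipe:

#!/usr/bin/python
# 10_create_tag (before_vm_migrate_source) -- sketch only
import hooking

domxml = hooking.read_domxml()
domain = domxml.getElementsByTagName('domain')[0]
metadata_elems = domain.getElementsByTagName('metadata')
if metadata_elems:
    metadata = metadata_elems[0]
else:
    metadata = domxml.createElement('metadata')
    domain.appendChild(metadata)
# "is_migration" is the flag we already use; whether libvirt keeps an element
# without a proper namespace under <metadata> is exactly the open question here
flag = domxml.createElement('is_migration')
metadata.appendChild(flag)
hooking.write_domxml(domxml)

#!/usr/bin/python
# 20_nsupdate (after_vm_destroy) -- sketch only
import hooking

domxml = hooking.read_domxml()
if domxml.getElementsByTagName('is_migration'):
    pass  # tag present: the destroy is part of a migration, skip the dns update
else:
    pass  # tag absent: run the existing nsupdate commands here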

Kindly awaiting your reply.

Marko Vrgotic

On 03/10/2019, 12:27, "Michal Skrivanek"  wrote:



> On 2 Oct 2019, at 13:29, Vrgotic, Marko  wrote:
> 
> Any ideas
>  
> From: "Vrgotic, Marko" 
> Date: Friday, 27 September 2019 at 17:26
> To: "users@ovirt.org" 
> Subject: How to pass parameters between VDSM Hooks domxml in single run
>  
> Dear oVirt,
>  
> A while ago we discussed ways to change/update the content of parameters of the 
domxml in a certain action.
>  
> As I mentioned before, we have added the VDSMHook 60_nsupdate which 
removes the DNS record entries when a VM is destroyed:
>  
> …
> domxml = hooking.read_domxml()
> name = domxml.getElementsByTagName('name')[0]
> name = " ".join(name.nodeValue for name in name.childNodes
>                 if name.nodeType == name.TEXT_NODE)
> nsupdate_commands = """server {server_ip}
> update delete {vm_name}.example.com a
> update delete {vm_name}.example.com
> update delete {vm_name}.example.com txt
> send
> """.format(server_ip="172.16.1.10", vm_name=name)
> …
>  
> The goal:
> However, we did not want to remove the dns records when the VM is only 
migrated. Since a migration is considered a “destroy” action, we took the following approach:
>   • In state “before_vm_migrate_source” add a hook which will write the flag 
“is_migration” to the domxml
>   • Once the VM is scheduled for migration, this hook should add the flag 
“is_migration” to the domxml
>   • Once 60_nsupdate is triggered, it will check for the flag and, if it is 
there, skip the dns record action and only remove the flag 
“is_migration” from the domxml of the VM
>  
> …
> domxml = hooking.read_domxml()
> migration = domxml.createElement("is_migration")
> domxml.getElementsByTagName("domain")[0].appendChild(migration)
> logging.info("domxml_updated {}".format(domxml.toprettyxml()))
> hooking.write_domxml(domxml)
> …
>  
> When executing the first time, we observed that the flag “is_migration”
>  
> [domxml excerpt; the XML element tags were stripped by the list archive.
> Recoverable fields: name hookiesvm, uuid fcfa66cb-b251-43a3-8e2b-f33b3024a749,
> metadata namespaces http://ovirt.org/vm/tune/1.0 and http://ovirt.org/vm/1.0,
> values 4.3 / False / false / 1024 / 1024, ...skipping..., SELinux labels
> system_u:system_r:svirt_t:s0:c169,c575 and
> system_u:object_r:svirt_image_t:s0:c169,c575, DAC labels +107:+107]

you can’t just add a random tag into libvirt xml in a random place, it will 
be dropped by libvirt.
you can add it to metadata though. we use that for ovirt-specific 
information

>   
> is added to domxml, but was present once 60_nsupdate hook was executed.
>  
> The question: How do we make sure that, when the domxml is updated, the 
> update is visible/usable by the following hook, in a single run? How do we pass these 
> changes between hooks?
>  
> Kindly awaiting your reply.
>  
>  
> — — —
> Met vriendelijke groet / Kind regards,
> 
> Marko Vrgotic
>  
>  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 

[ovirt-users] Re: Ovirt 4.2.7 won't start and drops to emergency console

2019-10-03 Thread jeremy_tourville
I did some checking and my disk is not in the state I expected. (The system 
doesn't even know the VG exists in its present state.)   See the results:
# pvs
  PV VG  Fmt  Attr PSize   PFree 
  /dev/md127 onn_vmh lvm2 a--  222.44g 43.66g
  /dev/sdd1  gluster_vg3 lvm2 a--   <4.00g <2.00g

# pvs -a
  PV                                                 VG          Fmt  Attr PSize   PFree
  /dev/md127                                         onn_vmh     lvm2 a--  222.44g 43.66g
  /dev/onn_vmh/home                                              ---        0      0
  /dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0+1              ---        0      0
  /dev/onn_vmh/root                                              ---        0      0
  /dev/onn_vmh/swap                                              ---        0      0
  /dev/onn_vmh/tmp                                               ---        0      0
  /dev/onn_vmh/var                                               ---        0      0
  /dev/onn_vmh/var_crash                                         ---        0      0
  /dev/onn_vmh/var_log                                           ---        0      0
  /dev/onn_vmh/var_log_audit                                     ---        0      0
  /dev/sda1                                                      ---        0      0
  /dev/sdb1                                                      ---        0      0
  /dev/sdd1                                          gluster_vg3 lvm2 a--   <4.00g <2.00g
  /dev/sde1                                                      ---        0      0

# vgs
  VG  #PV #LV #SN Attr   VSize   VFree 
  gluster_vg3   1   1   0 wz--n-  <4.00g <2.00g
  onn_vmh   1  11   0 wz--n- 222.44g 43.66g

# vgs -a
  VG  #PV #LV #SN Attr   VSize   VFree 
  gluster_vg3   1   1   0 wz--n-  <4.00g <2.00g
  onn_vmh   1  11   0 wz--n- 222.44g 43.66g

# lvs
  LV                                   VG          Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  tmpLV                                gluster_vg3 -wi---       2.00g
  home                                 onn_vmh     Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.7.1-0.20181216.0   onn_vmh     Vwi---tz-k 146.60g pool00 root
  ovirt-node-ng-4.2.7.1-0.20181216.0+1 onn_vmh     Vwi-aotz-- 146.60g pool00 ovirt-node-ng-4.2.7.1-0.20181216.0  4.81
  pool00                               onn_vmh     twi-aotz-- 173.60g                                            7.21   2.30
  root                                 onn_vmh     Vwi-a-tz-- 146.60g pool00                                     2.92
  swap                                 onn_vmh     -wi-ao       4.00g
  tmp                                  onn_vmh     Vwi-aotz--   1.00g pool00                                    53.66
  var                                  onn_vmh     Vwi-aotz--  15.00g pool00                                    15.75
  var_crash                            onn_vmh     Vwi-aotz--  10.00g pool00                                     2.86
  var_log                              onn_vmh     Vwi-aotz--   8.00g pool00                                    14.73
  var_log_audit                        onn_vmh     Vwi-aotz--   2.00g pool00                                     6.91

# lvs -a
  LV                                   VG          Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  tmpLV                                gluster_vg3 -wi---       2.00g
  home                                 onn_vmh     Vwi-aotz--   1.00g pool00                                     4.79
  [lvol0_pmspare]                      onn_vmh     ewi---     180.00m
  ovirt-node-ng-4.2.7.1-0.20181216.0   onn_vmh     Vwi---tz-k 146.60g pool00 root
  ovirt-node-ng-4.2.7.1-0.20181216.0+1 onn_vmh     Vwi-aotz-- 146.60g pool00 ovirt-node-ng-4.2.7.1-0.20181216.0  4.81
  pool00                               onn_vmh     twi-aotz-- 173.60g

[ovirt-users] mount to removed storage domain on node with HostedEngine

2019-10-03 Thread Mark Steele
Hello,

oVirt Engine Version: 3.5.0.1-1.el6

We recently removed the Data (Master) storage domain from our ovirt cluster
and replaced it with another. All is working great. When looking at the old
storage device I noticed that one of our nodes still has an NFS connection
to it.

Looking at the results for 'mount' I see two mounts to the node in question
(192.168.64.15):

192.168.64.15:/nfs-share/ovirt-store/hosted-engine on
/rhev/data-center/mnt/192.168.64.15:_nfs-share_ovirt-store_hosted-engine
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.15,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.64.15)
192.168.64.11:/export/testovirt on
/rhev/data-center/mnt/192.168.64.11:_export_testovirt
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.11,mountvers=3,mountport=46034,mountproto=udp,local_lock=none,addr=192.168.64.11)
192.168.64.163:/export/storage on
/rhev/data-center/mnt/192.168.64.163:_export_storage
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.163,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.64.163)
192.168.64.55:/export/storage on
/rhev/data-center/mnt/192.168.64.55:_export_storage
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.55,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.64.55)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
192.168.64.15:/nfs-share/ovirt-store/hosted-engine on
/rhev/data-center/mnt/192.168.64.15:_nfs-share_ovirt-store_hosted-engine
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.15,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.64.15)
10.1.90.64:/ifs/telvue/infrastructure/iso on
/rhev/data-center/mnt/10.1.90.64:_ifs_telvue_infrastructure_iso type nfs
(rw,relatime,vers=3,rsize=131072,wsize=524288,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.1.90.64,mountvers=3,mountport=300,mountproto=udp,local_lock=none,addr=10.1.90.64)
192.168.64.163:/export/storage/iso-store on
/rhev/data-center/mnt/192.168.64.163:_export_storage_iso-store type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.163,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.64.163)

/etc/fstab has no entry for these so I assume they are left over from when
the storage domain existed.

Is it safe to 'umount' these mounts or is there a hook I may not be aware
of? Is there another way of removing this from the node via that OVM?

None of the other nodes in the cluster have this mount. This node is not
the SPM.

Thank you for your time and consideration.

Best regards,

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CF4VZNG2M4LRALJCMADRX4T3UT25TK74/


[ovirt-users] Re: ovirt 4.3.6 kickstart install fails when

2019-10-03 Thread adrianquintero
I tried the suggestions from here but same issue:
https://rhv.bradmin.org/ovirt-engine/docs/Installing_Red_Hat_Virtualization_as_a_standalone_Manager_with_local_databases/Installing_Hosts_for_RHV_SM_localDB_deploy.html
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKKGBJENU4YHBZUNURAL4WGTM62FX5PZ/


[ovirt-users] oVirt Gluster Fails to Failover after node failure

2019-10-03 Thread Robert Crawford
One of my nodes has failed and the domain isn't coming online because the 
primary node isn't up? 
In the parameters there is backup-volfile-servers=192.168.100.2:192.168.100.3 
Any help?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HOSMQIUHDCA4URMUHOC4BYXBIAICB2RO/


[ovirt-users] ovirt 4.3.6 kickstart install fails when

2019-10-03 Thread adrianquintero
Kickstart entries:

-
liveimg --url=http://192.168.1.10/ovirt-iso-436/ovirt-node-ng-image.squashfs.img

clearpart --drives=sda --initlabel --all
autopart --type=thinp
rootpw --iscrypted $1$xxxbSLxxgwc0
lang en_US
keyboard --vckeymap=us --xlayouts='us'
timezone --utc America/New_York 
--ntpservers=0.centos.pool.ntp.org,1.centos.pool.ntp.org,2.centos.pool.ntp.org,3.centos.pool.ntp
#network  --hostname=host21.example.com
network --onboot yes --device eno49
zerombr
text

reboot
-

The Error on screen, right after "Creating swap on /dev/mapper/onn_host1-swap":
DeviceCreatorError: ('lvcreate failed for onn_host1/pool00: running /sbin/lvm lvcreate --thinpool onn_host1/pool00 --size 464304m --poolmetadatasize 232 --chunksize 64 --config devices { preffered_names=["^/dev/mapper/", "^/dev/md", "^/dev/sd"] } failed', ' onn_host1-pool00')


No issue with oVirt 4.3.5 and the same kickstart file.


Any suggestions?

Thanks,

AQ
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YVHULSDL3WMSR52IYLP7VKLGSVPDQZGY/


[ovirt-users] Re: oVirt 4.3.5 WARN no gluster network found in cluster

2019-10-03 Thread adrianquintero
It seems to be working:
ping to host1.example.com returns the main management IP.

From host1.example.com I can ping the 2 gluster IPs that are configured on the 
other 2 hosts that make up the cluster, i.e. ping -I ens4f1 192.168.0.68. 
However, I can't ping the host's own gluster IP, i.e. ping -I ens4f1 192.168.0.69.

and in the engine logs I still see:
 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler7) [780d3310] Could not associate brick 
host1.example.com:/gluster_bricks/vmstore/vmstore' of volume 
'x9e0f-649055e0e07b' with correct network as no gluster network found 
in cluster 'xx-11e9-b8d3-00163e5d860d'
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/22NZCQGUETHGNIUNLPIH6H2ZYGCBKJTK/


[ovirt-users] oVirt Gluster Volume Cannot Find UUID

2019-10-03 Thread Robert Crawford
Hey Guys,

After an update my node goes into emergency mode, saying it can't find the UUID 
associated with one of my physical volumes.
I tried an imgbase rollback and nothing changed.

pvs is showing that the identifier is missing.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M46TDU6S3JTGLA44NB7R3RQTNKJVECZJ/


[ovirt-users] Re: How to pass parameters between VDSM Hooks domxml in single run

2019-10-03 Thread Michal Skrivanek


> On 3 Oct 2019, at 12:50, Vrgotic, Marko  wrote:
> 
> Hi Michal,
> 
> Thank you. Would you be so kind to provide me with additional clarification?
> 
>> you can’t just add a random tag into libvirt xml in a random place, it will 
>> be dropped by libvirt.
> I understand, thank you. About the persistence of the added tag: it was not 
> used/written during the first migration, but it was present in the domxml on the second migration. 
> 
>> you can add it to metadata though. we use that for ovirt-specific information
> Can you please provide some more HowTo/HowNotTo information?
> Can we manipulate the tag in the metadata section in each iteration?
> I assume the VM metadata is shared/communicated between Hosts, or read and provided 
> to the Hosts by oVirt-Engine?
> In short, we are trying to achieve:
>   - start migration
>     - ex: a 10_create_tag hook inserts an "is_migration" tag into the XML 
> metadata section  <= maybe we can use the before_vm_migrate_source hook 
>   - migration is finished and the after_vm_destroy hook's turn comes:
>     - ex: 20_nsupdate reads the metadata and:
>       - if the "is_migration" tag exists, do not run the dns 
> update, but remove the tag
>       - if the "is_migration" tag does not exist, run the dns 
> update and remove the tag

so... hmm... if IIUC you basically just need to not execute the dns update (remove the 
entry for the vm) when the VM migrates away, and do execute it when it shuts down.

maybe i can suggest two other approaches which could work? these would be 
preferable, because manipulating libvirt’s xml at the time of lifecycle 
changes is better avoided. libvirt is touching the xml at the same time 
and it may run into ugly locking problems.
how about
- use after_vm_migrate_source and make a note of that vmid (touch a file in 
/tmp/ or whatever), and then check that in after_vm_destroy (see the sketches below)
or
- use before_vm_destroy and use vdsm-client VM getStats vmid=‘xyz’ to get the 
current VM’s status from vdsm (before it goes away) and you should see if it’s 
“Migration Source” vs anything else (Powering Down for ordinary shutdowns or Up 
for crashes, I guess)
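
to illustrate both ideas, very rough and untested sketches; the marker directory, the file naming and the exact getStats output handling are only assumptions, adapt as needed:

#!/usr/bin/python
# after_vm_migrate_source hook -- leave a note that this VM left the host via migration
import os
import hooking

MARKER_DIR = '/var/run/vdsm'  # assumption: any host-local directory would do

domxml = hooking.read_domxml()
vm_uuid = domxml.getElementsByTagName('uuid')[0].firstChild.nodeValue
open(os.path.join(MARKER_DIR, 'migrated-%s' % vm_uuid), 'w').close()

#!/usr/bin/python
# after_vm_destroy hook (e.g. the existing 60_nsupdate) -- check for the note
import os
import hooking

MARKER_DIR = '/var/run/vdsm'

domxml = hooking.read_domxml()
vm_uuid = domxml.getElementsByTagName('uuid')[0].firstChild.nodeValue
marker = os.path.join(MARKER_DIR, 'migrated-%s' % vm_uuid)
if os.path.exists(marker):
    os.unlink(marker)  # destroy caused by a migration: keep the DNS records
else:
    pass  # ordinary destroy: run the nsupdate logic here

#!/usr/bin/python
# alternative: before_vm_destroy hook asking vdsm for the VM status
import json
import subprocess
import hooking

domxml = hooking.read_domxml()
vm_uuid = domxml.getElementsByTagName('uuid')[0].firstChild.nodeValue
out = subprocess.check_output(['vdsm-client', 'VM', 'getStats', 'vmid=%s' % vm_uuid])
stats = json.loads(out)[0]  # assumption: vdsm-client prints a JSON list with one stats dict
if stats.get('status') == 'Migration Source':
    pass  # migrating away: skip the dns update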

Thanks,
michal

> 
> Kindly awaiting your reply.
> 
> Marko Vrgotic
> 
> On 03/10/2019, 12:27, "Michal Skrivanek"  wrote:
> 
> 
> 
>> On 2 Oct 2019, at 13:29, Vrgotic, Marko  wrote:
>> 
>> Any ideas
>> 
>> From: "Vrgotic, Marko" 
>> Date: Friday, 27 September 2019 at 17:26
>> To: "users@ovirt.org" 
>> Subject: How to pass parameters between VDSM Hooks domxml in single run
>> 
>> Dear oVirt,
>> 
>> A while ago we discussed ways to change/update the content of parameters of 
>> the domxml in a certain action.
>> 
>> As I mentioned before, we have added the VDSMHook 60_nsupdate which removes 
>> the DNS record entries when a VM is destroyed:
>> 
>> …
>> domxml = hooking.read_domxml()
>> name = domxml.getElementsByTagName('name')[0]
>> name = " ".join(name.nodeValue for name in name.childNodes
>>                 if name.nodeType == name.TEXT_NODE)
>> nsupdate_commands = """server {server_ip}
>> update delete {vm_name}.example.com a
>> update delete {vm_name}.example.com
>> update delete {vm_name}.example.com txt
>> send
>> """.format(server_ip="172.16.1.10", vm_name=name)
>> …
>> 
>> The goal:
>> However, we did not want to remove the dns records when the VM is only 
>> migrated. Since a migration is considered a “destroy” action, we took the following approach:
>>  • In state “before_vm_migrate_source” add a hook which will write the flag 
>> “is_migration” to the domxml
>>  • Once the VM is scheduled for migration, this hook should add the flag 
>> “is_migration” to the domxml
>>  • Once 60_nsupdate is triggered, it will check for the flag and, if it is 
>> there, skip the dns record action and only remove the flag 
>> “is_migration” from the domxml of the VM
>> 
>> …
>> domxml = hooking.read_domxml()
>>migration = domxml.createElement("is_migration")
>>domxml.getElementsByTagName("domain")[0].appendChild(migration)
>>logging.info("domxml_updated {}".format(domxml.toprettyxml()))
>>hooking.write_domxml(domxml)
>> …
>> 
>> When executing the first time, we observed that the flag “is_migration”
>> 
>> [domxml excerpt; the XML element tags were stripped by the list archive.
>> Recoverable fields: name hookiesvm, uuid fcfa66cb-b251-43a3-8e2b-f33b3024a749,
>> metadata namespaces http://ovirt.org/vm/tune/1.0 and http://ovirt.org/vm/1.0,
>> values 4.3 / bool False / false / int 1024 / int 1024, ...skipping...,
>> a pci address (slot 0x09), SELinux labels system_u:system_r:svirt_t:s0:c169,c575
>> and system_u:object_r:svirt_image_t:s0:c169,c575, DAC labels +107:+107]
> 
>you can’t just add a random tag into libvirt xml in a random place, it 
> will be dropped by libvirt.
>you can add it to metadata 

[ovirt-users] Re: How to pass parameters between VDSM Hooks domxml in single run

2019-10-03 Thread Michal Skrivanek


> On 2 Oct 2019, at 13:29, Vrgotic, Marko  wrote:
> 
> Any ideas
>  
> From: "Vrgotic, Marko" 
> Date: Friday, 27 September 2019 at 17:26
> To: "users@ovirt.org" 
> Subject: How to pass parameters between VDSM Hooks domxml in single run
>  
> Dear oVirt,
>  
> A while ago we discussed ways to change/update the content of parameters of 
> the domxml in a certain action.
>  
> As I mentioned before, we have added the VDSMHook 60_nsupdate which removes 
> the DNS record entries when a VM is destroyed:
>  
> …
> domxml = hooking.read_domxml()
> name = domxml.getElementsByTagName('name')[0]
> name = " ".join(name.nodeValue for name in name.childNodes
>                 if name.nodeType == name.TEXT_NODE)
> nsupdate_commands = """server {server_ip}
> update delete {vm_name}.example.com a
> update delete {vm_name}.example.com
> update delete {vm_name}.example.com txt
> send
> """.format(server_ip="172.16.1.10", vm_name=name)
> …
>  
> The goal:
> However, we did not want to remove the dns records when the VM is only 
> migrated. Since a migration is considered a “destroy” action, we took the following approach:
>   • In state “before_vm_migrate_source” add a hook which will write the flag 
> “is_migration” to the domxml
>   • Once the VM is scheduled for migration, this hook should add the flag 
> “is_migration” to the domxml
>   • Once 60_nsupdate is triggered, it will check for the flag and, if it is 
> there, skip the dns record action and only remove the flag 
> “is_migration” from the domxml of the VM
>  
> …
> domxml = hooking.read_domxml()
> migration = domxml.createElement("is_migration")
> domxml.getElementsByTagName("domain")[0].appendChild(migration)
> logging.info("domxml_updated {}".format(domxml.toprettyxml()))
> hooking.write_domxml(domxml)
> …
>  
> When executing the first time, we observed that the flag “is_migration”
>  
> [domxml excerpt; the XML element tags were stripped by the list archive.
> Recoverable fields: name hookiesvm, uuid fcfa66cb-b251-43a3-8e2b-f33b3024a749,
> metadata namespaces http://ovirt.org/vm/tune/1.0 and http://ovirt.org/vm/1.0,
> values 4.3 / bool False / false / int 1024 / int 1024, ...skipping...,
> a pci address (slot 0x09), SELinux labels system_u:system_r:svirt_t:s0:c169,c575
> and system_u:object_r:svirt_image_t:s0:c169,c575, DAC labels +107:+107]

you can’t just add a random tag into libvirt xml in a random place, it will be 
dropped by libvirt.
you can add it to metadata though. we use that for ovirt-specific information

>   
> is added to domxml, but was present once 60_nsupdate hook was executed.
>  
> The question: How do we make sure that, when the domxml is updated, the 
> update is visible/usable by the following hook, in a single run? How do we pass these 
> changes between hooks?
>  
> Kindly awaiting your reply.
>  
>  
> — — —
> Met vriendelijke groet / Kind regards,
> 
> Marko Vrgotic
>  
>  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IC4J6CAJUQOSLU3ZJPX3ZHTUM4HUCMGU/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MHQ4VQ4IXO6GM5MOGQSHAX74SCP2QUJD/


[ovirt-users] Re: ansible module to copy floating disks

2019-10-03 Thread Eyal Shenitzky
On Thu, 3 Oct 2019 at 12:45, Gianluca Cecchi 
wrote:

> On Thu, Oct 3, 2019 at 6:55 AM Eyal Shenitzky  wrote:
>
>> You can use the update_storage_domains action.
>> According to the action implementation [1], it seems that you need to
>> specify where you want the disk to appear (in which storage domains).
>>
>> For example:
>> If the disks already reside on sd1 and you want to copy it to sd2, you
>> need to specify both sd1 and sd2.
>>
>> [1]
>> https://github.com/ansible/ansible/blob/25ac7042b070b22c5377f7a43399c19060a38966/lib/ansible/modules/cloud/ovirt/ovirt_disk.py#L532
>> [2] -
>> https://docs.ansible.com/ansible/latest/modules/ovirt_disk_module.html
>>
>>

 --
 Regards,
 Eyal Shenitzky

>>>
>>>
> I can try, thanks.
> But is it supported on block based storage such as iSCSI or FC?
> I see this in your [1] above
> "
> # We don't support move for non file based storages:
> if disk.storage_type != otypes.DiskStorageType.IMAGE:
>     return changed
> "
>

Disk type IMAGE is the term for both Block and File-based disks.
So I guess that there is a problem with the documentation.


> Also, in my opinion it is not clear which action corresponds to the different
> "state" possibilities: present/absent/attached/detached.
> In the web admin GUI I can have a disk active on a VM and I can:
>
> - deactivate the disk
> I see red down arrow for the disk that remains associated with the VM
>
> - remove the disk
> a) remove permanently removes the disk from storage
> b) if I don't select "remove permanently" the disk goes into the floating
> disks list
>
> How do they map with Ansible module state options?
>

You can have a look at the 'example' section to get more information on the
supported actions.



>
> Gianluca
> Gianluca
>
>
>

-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2OYHOTUREIUJ5F5ZU4L5RKFMCGGLCB3I/


[ovirt-users] Re: ansible module to copy floating disks

2019-10-03 Thread Gianluca Cecchi
On Thu, Oct 3, 2019 at 6:55 AM Eyal Shenitzky  wrote:

> You can use the update_storage_domains action.
> According to the action implementation [1], it seems that you need to
> specify where you want the disk to appear (in which storage domains).
>
> For example:
> If the disks already reside on sd1 and you want to copy it to sd2, you
> need to specify both sd1 and sd2.
>
> [1]
> https://github.com/ansible/ansible/blob/25ac7042b070b22c5377f7a43399c19060a38966/lib/ansible/modules/cloud/ovirt/ovirt_disk.py#L532
> [2] -
> https://docs.ansible.com/ansible/latest/modules/ovirt_disk_module.html
>
>
>>>
>>> --
>>> Regards,
>>> Eyal Shenitzky
>>>
>>
>>
I can try, thanks.
But is it supported on block based storage such as iSCSI or FC?
I see this in your [1] above
"
# We don't support move for non file based storages:
if disk.storage_type != otypes.DiskStorageType.IMAGE:
    return changed
"

Also, in my opinion it is not clear which action corresponds to the different
"state" possibilities: present/absent/attached/detached.
In the web admin GUI I can have a disk active on a VM and I can:

- deactivate the disk
I see red down arrow for the disk that remains associated with the VM

- remove the disk
a) remove permanently removes the disk from storage
b) if I don't select "remove permanently" the disk goes into the floating
disks list

How do they map with Ansible module state options?

Gianluca
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CCBJECFDFUFGDXMWRBUT7PWBZ7TPOODY/


[ovirt-users] Re: Fwd: NEsted oVirt with Ryzen

2019-10-03 Thread Milan Zamazal
"JoseMa(G-Mail)"  writes:

> Hi folks,
> When trying to start a VM in a nested env with Ryzen it complains with:
>
> 2019-09-28 14:29:53,940-0400 ERROR (vm/0391a661) [virt.vm]
> (vmId='0391a661-20fd-490a-9653-dd217147224d') The vm start process failed
> (vm:933)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 867, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2880, in
> _run
> dom.createWithFlags(flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
> line 131, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94,
> in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1110, in
> createWithFlags
> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
> dom=self)
>
>
> libvirtError: the CPU is incompatible with host CPU: Host CPU does not
> provide required features: monitor
>
>
> The hooks for nestedv are installed.  Is there any way to modify the xml
> passed to the host used by libvirt and remove the monitor flag ?? Like this
>
> 

Hi, I think the 'cpuflags' hook can be used for this purpose; see the
documentation in its before_vm_start.py file for how to use it.  In case
it's not enough for you, you can write your own (much simpler)
before_vm_start hook to perform the transformation, along the lines of the
sketch below.
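
A minimal, untested sketch of such a hook; it assumes the guest really can do
without the monitor feature, which is a separate question:

#!/usr/bin/python
# before_vm_start hook -- sketch: drop the 'monitor' CPU feature requirement
import hooking

domxml = hooking.read_domxml()
for cpu in domxml.getElementsByTagName('cpu'):
    # copy the node list before removing children from it
    for feature in list(cpu.getElementsByTagName('feature')):
        if feature.getAttribute('name') == 'monitor':
            cpu.removeChild(feature)
hooking.write_domxml(domxml)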

HTH,
Milan

> Lab is installed using Centos latest and oVirt latest as today!. By the way
> a nested intel cpu box works with no problem.
>
>
> THANKS!!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DPJ66NL5QYYFCIREI6JKIEWQMDXZG6L4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TDQEDULJ2HEGAFKSFKOMXQO2DH2EVKTU/


[ovirt-users] Re: web admin: snapshots UI

2019-10-03 Thread Yedidyah Bar David
Now filed this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1758068

Best regards,

On Thu, Sep 19, 2019 at 2:49 PM Laura Wright  wrote:

> I'd be happy to take a pass at doing another design for it. I would tend
> to agree that the general info of the snapshot should be surfaced at a
> higher level.
>
> On Thu, Sep 19, 2019 at 2:46 AM Yedidyah Bar David 
> wrote:
>
>> Hi all,
>>
>> I'd like to ask/suggest something:
>>
>> In the previous UI (4.2?), when you entered the VM snapshots page, you
>> saw per each snapshot some details that you do not see in the current
>> UI. Specifically, to see the creation date, you have to press General.
>> So I got used to writing longer descriptions, with the creation date
>> in them, but that also does not work well, because in the main view
>> only the start of the description is shown. At least for me
>> (combination of browser, OS, etc.), there is a lot of white unused
>> space between the end of the presented-part of the description, and
>> the "> General" after it. Can we somehow use this space to either add
>> the creation date, or more/all of the description, or even make this
>> customizable? And perhaps make this a sortable table (so that you can
>> sort by description, or if you added Date, you can sort by that, or
>> memory, or whatever)? I don't mind opening a bug/RFE for this, if more
>> people agree that it's useful, and unless there are some (unknown to
>> me) reasons against that (other than the time needed for implementing
>> this, which I expect is quite minimal for the simplest option of just
>> showing more of the description).
>>
>> Thanks and best regards,
>> --
>> Didi
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UY6PJYQGFWS6VDRDMP7SVLFC5D4HZCB6/
>>
>
>
> --
>
> Laura Wright
>
> She/Her/Hers
>
> UXD Team
>
> Red Hat Massachusetts 
>
> 314 Littleton Rd
>
> lwri...@redhat.com
> 
>


-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V4EFPUM3KKDO4HEK7EIPKFUOU6AC6RJ4/


[ovirt-users] Re: Fwd: Unable to Upgrade

2019-10-03 Thread Akshita Jain
I followed the same steps as Jayme did, and I've double-checked with infra as well; 
there is no issue with the infra. Why does gluster peer status show 
disconnected after the upgrade?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HM3EBYFOXRZKEK4FKUV6VBRI7J34KS73/