[ovirt-users] Re: Any plans for oVirt and SBD fencing (a.k.a. poison pill)

2019-05-17 Thread thutrangctp
SBD is currently limited to 255 nodes per partition. If you need to configure 
larger clusters, create multiple SBD partitions, split the watch daemons across 
them, and configure one external/sbd STONITH resource per SBD device.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJC5RJ2JIHNLZDMJBDQJJHPX752ASNSN/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-05-17 Thread dan . midthun
I have a recording of the entire process that I can post, though it's ~25 
minutes long. Also, after the failure I was able to look at the engine log and 
have a recording of that as well.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6GDGHJSBL3D6ASHXVIPILGFZDMOALFNO/


[ovirt-users] Re: Wrong disk size in UI after expanding iscsi direct LUN

2019-05-17 Thread Scott Dickerson
On Thu, May 16, 2019 at 11:11 AM Bernhard Dick  wrote:

> Hi,
>
> I've extended the size of one of my direct iSCSI LUNs. The VM is seeing
> the new size, but the web interface still reports the old size.
> Is there a way to update this information? I already took a look at the
> list, but there are only reports regarding updating the size the VM sees.
>

Which oVirt version? Which web interface and view are you checking, the Admin
Portal or the VM Portal?
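
If I recall correctly, recent oVirt versions also expose a "refresh LUN" action
on the disk through the REST API/SDK, which asks the engine to re-read the LUN
details from a host. A hedged sketch, assuming your ovirtsdk4 version has the
refresh_lun action; the engine URL, credentials, disk name "mydirectlun" and
host name "host1" are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',                                   # placeholder
    ca_file='ca.pem',
)

disks_service = connection.system_service().disks_service()
disk = disks_service.list(search='name=mydirectlun')[0]

# Ask the engine to refresh the LUN details (including its size) as seen
# by the given host, so the UI reflects the new capacity.
disks_service.disk_service(disk.id).refresh_lun(
    host=types.Host(name='host1'),
)

connection.close()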


>
>Best regards
>  Bernhard
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/54YHISUA66227IAMI2UVPZRIXV54BAKA/
>


-- 
Scott Dickerson
Senior Software Engineer
RHV-M Engineering - UX Team
Red Hat, Inc
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SX73NWL5ACZXIU33UFNREQ65GGMATDUP/


[ovirt-users] Re: deprecating export domain?

2019-05-17 Thread Nir Soffer
On Wed, May 15, 2019 at 3:52 PM Andreas Elvers <
andreas.elvers+ovirtfo...@solutions.work> wrote:

> Maybe I overlooked the information, but in recent RHVE 4.3 docs the
> information how to use the export storage domain have been removed and
> there is no alternative to do so, but to detach a data domain and attach it
> somewhere else. But how can I move my VMs one by one to a new storage
> domain on a different datacenter without completely detaching the original
> storage domain?
>

It's not clear what you mean by a different datacenter.

The best way to decommission a storage domain is to move the disks to another
domain in the same DC. You can do this while the VM is running, without any
downtime. When you are done, you can detach and remove the old storage domain.

If you want to move the VM to a different storage domain on another oVirt DC,
move the domain to the same DC until you finish the migration, then move the
domain back to the other DC and import the VM. If you want to use the same
domain for exporting and importing, you will need to move the VM to another
domain on the target DC.

If you want to move the VM to another oVirt setup, you can attach a
temporary storage domain, move the disk
to that storage domain, detach the domain, attach it to the other setup,
and import the VM.

If you can replicate the storage using your storage server (e.g., take a
snapshot of a LUN), you can attach the new LUN to the new setup and import
the VMs (this is how oVirt DR works).
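
Importing the VMs from such a replicated or re-attached domain can also be
scripted. A rough sketch with the oVirt Python SDK, under assumptions: the
attached domain is named "replicated-data", the VMs should land in cluster
"Default", and the engine URL/credentials are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',                                   # placeholder
    ca_file='ca.pem',
)

system = connection.system_service()
sds_service = system.storage_domains_service()
sd = sds_service.list(search='name=replicated-data')[0]
sd_vms_service = sds_service.storage_domain_service(sd.id).vms_service()

# List the VMs that exist on the domain but are not registered in this setup,
# then register each one into a cluster.
for unregistered_vm in sd_vms_service.list(unregistered=True):
    sd_vms_service.vm_service(unregistered_vm.id).register(
        cluster=types.Cluster(name='Default'),
        allow_partial_import=True,
    )

connection.close()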

If you don't have shared storage between the two setups, for example in
different physical datacenters, you can:
- export the VM as OVA and import it on the other setup
- download the VM disks, upload them to the other setup and recreate the VM

To minimize downtime while importing and exporting a VM using attach/detach
storage domain:

On the source setup:
1. Attach the temporary storage domain used for moving VMs
2. While the VM is running, move the disks to the temporary storage domain
3. Stop the VM
4. Detach the temporary storage domain

On the destination setup:
5. Attach the temporary storage domain to other setup
6. Import the VM
7. Start the VM
8. While the VM is running, move the disks to the target storage domain

Steps 3-7 should take only 2-3 minutes and involve no data operations.
Exporting and importing big VMs using an export domain can take many minutes
or hours.

This can be automated using the oVirt REST API, SDK, or Ansible (see the sketch below).
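
For example, a minimal sketch of step 2 above - moving a running VM's disks to
a temporary storage domain with the oVirt Python SDK (ovirtsdk4). The engine
URL, credentials, VM name "myvm" and domain name "temp-domain" are placeholders,
not values from this thread:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',                                   # placeholder
    ca_file='ca.pem',
)

system = connection.system_service()

# Find the VM and its disk attachments.
vm = system.vms_service().list(search='name=myvm')[0]
attachments = system.vms_service().vm_service(vm.id) \
    .disk_attachments_service().list()

disks_service = system.disks_service()
for attachment in attachments:
    disk_service = disks_service.disk_service(attachment.disk.id)
    # Live storage migration: move the disk while the VM keeps running.
    disk_service.move(storage_domain=types.StorageDomain(name='temp-domain'))
    # Wait until the disk is unlocked before moving the next one.
    while disk_service.get().status == types.DiskStatus.LOCKED:
        time.sleep(5)

connection.close()

Detaching and attaching the temporary domain (steps 1, 4 and 5) can be scripted
the same way through the data center's attached storage domains service, or
with the ovirt_storage_domain Ansible module.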

> I don't want to bring down all of my VMs on the old storage domain for
> import. I want to export and import them one by one. When all VMs are moved
> to the new data center, only then I want to decommission
> the old data center.


> What is the rationale to deprecate the export storage and already remove
> documentation when there seems to be no alternative available?
>

There are alternatives, available for several versions now, as listed above.
The main alternative is attach/detach of a storage domain.

This method has many advantages:

- If you can keep the VMs on the same storage, it requires no data copies,
  minimizing the total time to move the VMs around.
- If you cannot keep the VMs on the same storage, it requires up to 2 data
  copies, like an export domain.
- Unlike an export domain, you can do the copies in the background while the
  VM is running (see my example above).
- It does not require NFS storage on your high-end iSCSI/FC setup.
- Since you can use block storage, it is more reliable and performs better
  thanks to multipath.
- Since it uses a regular data domain, it is easier to maintain and less
  likely to break.
- It works with any recent storage format (V3, V4, V5), while an export domain
  requires V1. Assuming that all future versions of a product will support all
  storage formats was never a good idea.

We are playing with a new way to move a VM with minimal downtime, using the
concept of an "external disk". With this you will be able to run a tool that
will shut down the VM on one setup and start it within seconds on the other
setup. While the VM is running, it will migrate the disks from the old storage
to the new storage. This method does not require shared storage to be available
to both setups, only that we can expose the source disks over the network, for
example using NBD.

There is a proof of concept here:
https://gerrit.ovirt.org/c/98926/

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GFOK55O5N4SRU5PA32P3LATW74E7WKT6/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-05-17 Thread dan . midthun
100% clean
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FTUQNRG6V6QZUJE3ULOB4BV4SVWKKWAV/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-05-17 Thread Simone Tiraboschi
On Fri, May 17, 2019 at 4:54 PM  wrote:

> Simone,
> What would be causing the "Cannot acquire host id" error?
>

Was that NFS folder absolutely empty?


>
> 2019-05-17 09:03:57,638-04 INFO
> [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand]
> (default task-1) [9126b578-9eac-4517-9e56-960c4751c3d1] Lock Acquired to
> object
> 'EngineLock:{exclusiveLocks='[e85f74bd-5e43-4a8c-8158-eb4696e041bc=STORAGE]',
> sharedLocks=''}'
> 2019-05-17 09:03:57,653-04 INFO
> [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand]
> (default task-1) [9126b578-9eac-4517-9e56-960c4751c3d1] Running command:
> AttachStorageDomainToPoolCommand internal: false. Entities affected :  ID:
> e85f74bd-5e43-4a8c-8158-eb4696e041bc Type: StorageAction group
> MANIPULATE_STORAGE_DOMAIN with role type ADMIN,  ID:
> adf59b7a-78a1-11e9-82af-00163e729513 Type: StoragePoolAction group
> MANIPULATE_STORAGE_DOMAIN with role
> type ADMIN
> 2019-05-17 09:03:57,678-04 INFO
> [org.ovirt.engine.core.bll.storage.pool.AddStoragePoolWithStoragesCommand]
> (default task-1) [66bed180] Running command:
> AddStoragePoolWithStoragesCommand internal: true. Entities affected :  ID:
> adf59b7a-78a1-11e9-82af-00163e729513 Type: StoragePoolAction group
> CREATE_STORAGE_POOL with role type ADMIN
> 2019-05-17 09:03:57,725-04 INFO
> [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand]
> (default task-1) [2091c1ca] Running command: ConnectStorageToVdsCommand
> internal: true. Entities affected :  ID:
> aaa0----123456789aaa Type: SystemAction group
> CREATE_STORAGE_DOMAIN with role type ADMIN
> 2019-05-17 09:03:57,732-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-1) [2091c1ca] START, ConnectStorageServerVDSCommand(HostName
> = host-93.home.local,
> StorageServerConnectionManagementVDSParameters:{hostId='7f7408f3-5558-4f9f-81f8-fa5c3f10c3f9',
> storagePoolId='----', storageType='NFS',
> connectionList='[StorageServerConnections:{id='7e8e82be-f033-487f-892c-16f9f12c31b7',
> connection='10.10
> .32.211:/storage/Dixon', iqn='null', vfsType='null', mountOptions='',
> nfsVersion='AUTO', nfsRetrans='null', nfsTimeo='null', iface='null',
> netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 4cbacffc
> 2019-05-17 09:03:57,756-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-1) [2091c1ca] FINISH, ConnectStorageServerVDSCommand, return:
> {7e8e82be-f033-487f-892c-16f9f12c31b7=0}, log id: 4cbacffc
> 2019-05-17 09:03:57,768-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
> (default task-1) [2091c1ca] START,
> HSMGetStorageDomainInfoVDSCommand(HostName = host-93.home.local,
> HSMGetStorageDomainInfoVDSCommandParameters:{hostId='7f7408f3-5558-4f9f-81f8-fa5c3f10c3f9',
> storageDomainId='e85f74bd-5e43-4a8c-8158-eb4696e041bc'}), log id: 5f095b14
> 2019-05-17 09:03:58,110-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand]
> (default task-1) [2091c1ca] FINISH, HSMGetStorageDomainInfoVDSCommand,
> return:  id='e85f74bd-5e43-4a8c-8158-eb4696e041bc'}, null>, log id: 5f095b14
> 2019-05-17 09:03:58,113-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default task-1) [2091c1ca] START, CreateStoragePoolVDSCommand(HostName =
> host-93.home.local,
> CreateStoragePoolVDSCommandParameters:{hostId='7f7408f3-5558-4f9f-81f8-fa5c3f10c3f9',
> storagePoolId='adf59b7a-78a1-11e9-82af-00163e729513',
> storagePoolName='Default',
> masterDomainId='e85f74bd-5e43-4a8c-8158-eb4696e041bc',
> domainsIdList='[e85f74bd-5e43-4a8c-8158-eb4696e041bc]',
>  masterVersion='2'}), log id: 58aabe3d
> 2019-05-17 09:03:59,128-04 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default task-1) [2091c1ca] Failed in 'CreateStoragePoolVDS' method
> 2019-05-17 09:03:59,134-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [2091c1ca] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802),
> VDSM host-93.home.local command CreateStoragePoolVDS failed: Cannot acquire
> host id: (u'e85f74bd-5e43-4a8c-8158-eb4696e041bc', SanlockException(19,
> 'Sanlock lockspace add failure', 'No such device'))
> 2019-05-17 09:03:59,134-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default task-1) [2091c1ca] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand'
> return value 'StatusOnlyReturn [status=Status [code=661, message=Cannot
> acquire host id: (u'e85f74bd-5e43-4a8c-8158-eb4696e041bc',
> SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))]]'
> 2019-05-17 09:03:59,134-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default task-1) [2091c1ca] HostName = host-93.home.local
> 2019-05-17 09:03:59,134-04 ERROR
> 

[ovirt-users] Re: ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-17 Thread Nir Soffer
On Fri, May 17, 2019 at 6:13 PM Nir Soffer  wrote:

> On Fri, May 17, 2019 at 2:47 PM Andreas Elvers <
> andreas.elvers+ovirtfo...@solutions.work> wrote:
>
>> Yeah. But I think this is just an artefact of the current version. All
>> images are in sync.
>>  dom_md/ids is an obsolete file anyway, as the docs say.
>>
>
> This page was correct about 10 years ago; the ids file is used for sanlock
> delta leases, which are the core infrastructure of oVirt. Without this file,
> you will not have any kind of storage.
>

Should be fixed in:
https://github.com/oVirt/ovirt-site/pull/1994


>
> Please use RHV documentation:
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/
>
> And the source:
> https://github.com/ovirt
>
> Anything else is not a reliable source of information.
>
> Nir
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TD7QYHMTT6JO5EMFMCRKWY4NO2EM632N/


[ovirt-users] Re: ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-17 Thread Nir Soffer
On Fri, May 17, 2019 at 2:47 PM Andreas Elvers <
andreas.elvers+ovirtfo...@solutions.work> wrote:

> Yeah. But I think this is just an artefact of the current version. All
> images are in sync.
>  dom_md/ids is an obsolete file anyway, as the docs say.
>

This page was correct about 10 years ago; the ids file is used for sanlock
delta leases, which are the core infrastructure of oVirt. Without this file,
you will not have any kind of storage.

Please use RHV documentation:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/

And the source:
https://github.com/ovirt

Anything else is not a reliable source of information.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SZW46B6YFSUWG5CIJYHWLFMMA6N7HK6R/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-05-17 Thread dan . midthun
Simone,
What would be causing the "Cannot acquire host id" error?

2019-05-17 09:03:57,638-04 INFO  
[org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] 
(default task-1) [9126b578-9eac-4517-9e56-960c4751c3d1] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[e85f74bd-5e43-4a8c-8158-eb4696e041bc=STORAGE]', 
sharedLocks=''}'
2019-05-17 09:03:57,653-04 INFO  
[org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] 
(default task-1) [9126b578-9eac-4517-9e56-960c4751c3d1] Running command: 
AttachStorageDomainToPoolCommand internal: false. Entities affected :  ID: 
e85f74bd-5e43-4a8c-8158-eb4696e041bc Type: StorageAction group 
MANIPULATE_STORAGE_DOMAIN with role type ADMIN,  ID: 
adf59b7a-78a1-11e9-82af-00163e729513 Type: StoragePoolAction group 
MANIPULATE_STORAGE_DOMAIN with role 
type ADMIN
2019-05-17 09:03:57,678-04 INFO  
[org.ovirt.engine.core.bll.storage.pool.AddStoragePoolWithStoragesCommand] 
(default task-1) [66bed180] Running command: AddStoragePoolWithStoragesCommand 
internal: true. Entities affected :  ID: adf59b7a-78a1-11e9-82af-00163e729513 
Type: StoragePoolAction group CREATE_STORAGE_POOL with role type ADMIN
2019-05-17 09:03:57,725-04 INFO  
[org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] 
(default task-1) [2091c1ca] Running command: ConnectStorageToVdsCommand 
internal: true. Entities affected :  ID: aaa0----123456789aaa 
Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2019-05-17 09:03:57,732-04 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(default task-1) [2091c1ca] START, ConnectStorageServerVDSCommand(HostName = 
host-93.home.local, 
StorageServerConnectionManagementVDSParameters:{hostId='7f7408f3-5558-4f9f-81f8-fa5c3f10c3f9',
 storagePoolId='----', storageType='NFS', 
connectionList='[StorageServerConnections:{id='7e8e82be-f033-487f-892c-16f9f12c31b7',
 connection='10.10
.32.211:/storage/Dixon', iqn='null', vfsType='null', mountOptions='', 
nfsVersion='AUTO', nfsRetrans='null', nfsTimeo='null', iface='null', 
netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 4cbacffc
2019-05-17 09:03:57,756-04 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(default task-1) [2091c1ca] FINISH, ConnectStorageServerVDSCommand, return: 
{7e8e82be-f033-487f-892c-16f9f12c31b7=0}, log id: 4cbacffc
2019-05-17 09:03:57,768-04 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand] 
(default task-1) [2091c1ca] START, HSMGetStorageDomainInfoVDSCommand(HostName = 
host-93.home.local, 
HSMGetStorageDomainInfoVDSCommandParameters:{hostId='7f7408f3-5558-4f9f-81f8-fa5c3f10c3f9',
 storageDomainId='e85f74bd-5e43-4a8c-8158-eb4696e041bc'}), log id: 5f095b14
2019-05-17 09:03:58,110-04 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand] 
(default task-1) [2091c1ca] FINISH, HSMGetStorageDomainInfoVDSCommand, return: 
, log id: 5f095b14
2019-05-17 09:03:58,113-04 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] 
(default task-1) [2091c1ca] START, CreateStoragePoolVDSCommand(HostName = 
host-93.home.local, 
CreateStoragePoolVDSCommandParameters:{hostId='7f7408f3-5558-4f9f-81f8-fa5c3f10c3f9',
 storagePoolId='adf59b7a-78a1-11e9-82af-00163e729513', 
storagePoolName='Default', 
masterDomainId='e85f74bd-5e43-4a8c-8158-eb4696e041bc', 
domainsIdList='[e85f74bd-5e43-4a8c-8158-eb4696e041bc]',
 masterVersion='2'}), log id: 58aabe3d
2019-05-17 09:03:59,128-04 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] 
(default task-1) [2091c1ca] Failed in 'CreateStoragePoolVDS' method
2019-05-17 09:03:59,134-04 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-1) [2091c1ca] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM 
host-93.home.local command CreateStoragePoolVDS failed: Cannot acquire host id: 
(u'e85f74bd-5e43-4a8c-8158-eb4696e041bc', SanlockException(19, 'Sanlock 
lockspace add failure', 'No such device'))
2019-05-17 09:03:59,134-04 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] 
(default task-1) [2091c1ca] Command 
'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand' return 
value 'StatusOnlyReturn [status=Status [code=661, message=Cannot acquire host 
id: (u'e85f74bd-5e43-4a8c-8158-eb4696e041bc', SanlockException(19, 'Sanlock 
lockspace add failure', 'No such device'))]]'
2019-05-17 09:03:59,134-04 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] 
(default task-1) [2091c1ca] HostName = host-93.home.local
2019-05-17 09:03:59,134-04 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] 
(default task-1) [2091c1ca] Command 'CreateStoragePoolVDSCommand(HostName = 
host-93.home.local, 

[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-05-17 Thread Simone Tiraboschi
On Fri, May 17, 2019 at 4:10 PM  wrote:

> I am having a similar problem - upgrade from 4.2.7 to 4.3.3 ... My Data
> Center would not activate, and I was getting all sorts of errors in the UI.
> I ended up shutting down the existing engine VM using --shutdown-vm ...
> trying to restart it, the console would report that the volume could not be
> found and would not start. Ugh.
>
> So, ssh into another host of the cluster, same thing. Grr. --deploy goes
> through most of the settings up until what I am assuming is probably step 4
> in the UI ... it errors when activating the storage domain. I created a
> separate NFS share so that I can hopefully import my machines that are still
> limping along, since I haven't rebooted the fileserver.
>
>
> 2019-05-17 09:03:59,698-0400 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "" value: "{
> "changed": false,
> "exception": "Traceback (most recent call last):\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_4b8QQJ/__main__.py\", line 664,
> in main\nstorage_domains_module.post_create_check(sd_id)\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_4b8QQJ/__main__.py\", line 526,
> in post_create_check\nid=storage_domain.id,\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\nreturn future.wait() if wait else future\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\nreturn self._code(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\nself._check_fault(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\nself._raise_error(response
>  , body)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[]\". HTTP response code is 400.\n",
> "failed": true,
> "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\".
> HTTP response code is 400."
> }"
>
>
> 400 - Bad request?
>

I'd suggest checking engine.log on the bootstrap engine VM; unfortunately,
engine error responses are not always that explicit.


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2PW3IWTGGU6J3325VVHBGPIVBPWFUTNN/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat 

stira...@redhat.com



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ADM3HMG56PZURMFPO5PRP5UOB6MAPLCO/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-05-17 Thread dan . midthun
I am having a similar problem - upgrade from 4.2.7 to 4.3.3 ... My Data Center 
would not activate, and I was getting all sorts of errors in the UI. I ended up 
shutting down the existing engine VM using --shutdown-vm ... trying to restart 
it, the console would report that the volume could not be found and would not 
start. Ugh.

So, ssh into another host of the cluster, same thing. Grr. --deploy goes 
through most of the settings up until what I am assuming is probably step 4 in 
the UI ... it errors when activating the storage domain. I created a separate 
NFS share so that I can hopefully import my machines that are still limping 
along, since I haven't rebooted the fileserver.


2019-05-17 09:03:59,698-0400 DEBUG var changed: host "localhost" var 
"otopi_storage_domain_details" type "" value: "{
"changed": false, 
"exception": "Traceback (most recent call last):\n  File 
\"/tmp/ansible_ovirt_storage_domain_payload_4b8QQJ/__main__.py\", line 664, in 
main\nstorage_domains_module.post_create_check(sd_id)\n  File 
\"/tmp/ansible_ovirt_storage_domain_payload_4b8QQJ/__main__.py\", line 526, in 
post_create_check\nid=storage_domain.id,\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in 
add\nreturn self._internal_add(storage_domain, headers, query, wait)\n  
File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in 
_internal_add\nreturn future.wait() if wait else future\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n 
   return self._code(response)\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in 
callback\nself._check_fault(response)\n  File 
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in 
_check_fault\nself._raise_error(response
 , body)\n  File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", 
line 118, in _raise_error\nraise error\nError: Fault reason is \"Operation 
Failed\". Fault detail is \"[]\". HTTP response code is 400.\n", 
"failed": true, 
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP 
response code is 400."
}"


400 - Bad request? 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2PW3IWTGGU6J3325VVHBGPIVBPWFUTNN/


[ovirt-users] Re: Host needs to be reinstalled after configuring power management

2019-05-17 Thread Michael Watters
Same here. The SSH host keys on our host nodes never change, but there must
be something in the oVirt database that needs to be updated.


On 5/16/19 6:20 PM, Andrew DeMaria wrote:
> That did the trick, thank you! That is really strange behavior however
> considering the fingerprint did not change..
>
> On Thu, May 16, 2019 at 7:14 AM Michael Watters  > wrote:
>
> Had the same message on our cluster.  The solution was to click
> edit on each host and refetch the ssh host key.  I'm not sure why
> this is necessary in the first place however.
>
>
> On 5/14/19 3:15 PM, Andrew DeMaria wrote:
>> Hi,
>>
>> I am running ovirt 4.3 and have found the following action item
>> immediately after configuring power management for a host:
>>
>> Host needs to be reinstalled as important configuration changes
>> were applied on it.
>>
>>
>> The thing is - I've just freshly installed this host and it seems
>> strange that I need to reinstall it.
>>  Is there a better way to install a host and configure power
>> management without having to reinstall it after?
>>
>>
>> Thanks,
>>
>> Andrew
>>
>> ___
>> Users mailing list -- users@ovirt.org 
>> To unsubscribe send an email to users-le...@ovirt.org 
>> 
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5HMTDLIYVJJEPZB373P6CPXB74LIMDYZ/
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/P6NHGE2EJ7PYHL2S4J6JGJC2NW25EHPW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/27IMIZ6HETT2TKTHNNMF7JXK7TWC4G7I/


[ovirt-users] Re: VM pools broken in 4.3

2019-05-17 Thread Rik Theys
Hi,

Things are going from bad to worse it seems.

I've created a new VM to be used as a template and installed it with
CentOS 7. I've created a template of this VM and created a new pool
based on this template.

When I try to boot one of the VMs from the pool, it fails and logs the
following error:

2019-05-17 14:48:01,709+0200 ERROR (vm/f7da02e4) [virt.vm]
(vmId='f7da02e4-725c-4c6c-bdd4-9f2cae8b10e4') The vm start process
failed (vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in
_startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2861, in
_run
    dom = self._connection.defineXML(self._domain.xml)
  File
"/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line
94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in
defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed',
conn=self)
libvirtError: XML error: requested USB port 3 not present on USB bus 0
2019-05-17 14:48:01,709+0200 INFO  (vm/f7da02e4) [virt.vm]
(vmId='f7da02e4-725c-4c6c-bdd4-9f2cae8b10e4') Changed state to Down: XML
error: requested USB port 3 not present on USB bus 0 (code=1) (vm:1675)

The strange thing is that this error was not present when I created the
initial master VM.

I get similar errors when I select Q35-type VMs instead of the default.

Did your test pool have VMs with USB enabled?

Regards,

Rik

On 5/17/19 10:48 AM, Rik Theys wrote:
>
> Hi Lucie,
>
> On 5/16/19 6:27 PM, Lucie Leistnerova wrote:
>>
>> Hi Rik,
>>
>> On 5/14/19 2:21 PM, Rik Theys wrote:
>>>
>>> Hi,
>>>
>>> It seems VM pools are completely broken since our upgrade to 4.3. Is
>>> anybody else also experiencing this issue?
>>>
>> I've tried to reproduce this issue. And I can use pool VMs as
>> expected, no problem. I've tested clean install and also upgrade from
>> 4.2.8.7.
>> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
>> ovirt-web-ui-1.5.2-1.el7ev.noarch 
> That is strange. I will try to create a new pool to verify if I also
> have the problem with the new pool. Currently we are having this issue
> with two different pools. Both pools were created in August or
> September of last year. I believe it was on 4.2 but could still have
> been 4.1.
>>>
>>> Only a single instance from a pool can be used. Afterwards the pool
>>> becomes unusable due to a lock not being released. Once ovirt-engine
>>> is restarted, another (single) VM from a pool can be used.
>>>
>> What users are running the VMs? What are the permissions?
>
> The users are taking VM's from the pools using the user portal. They
> are all member of a group that has the UserRole permission on the pools.
>
>> Each VM is running by other user? Were already some VMs running
>> before the upgrade?
>
> A user can take at most 1 VM from each pool. So it's possible a user
> has two VM's running (but not from the same pool). It doesn't matter
> which user is taking a VM from the pool. Once a user has taken a VM
> from the pool, no other user can take a VM. If the user that was able
> to take a VM powers it down and tries to run a new VM, it will also fail.
>
> During the upgrade of the host, no VM's were running.
>
>> Please provide exact steps. 
>
> 1. ovirt-engine is restarted.
>
> 2. User A takes a VM from the pool and can work.
>
> 3. User B can not take a VM from that pool.
>
> 4. User A powers off the VM it was using. Once the VM is down, (s)he
> tries to take a new VM, which also fails now.
>
> It seems the VM pool is locked when the first user takes a VM and the
> lock is never released.
>
> In our case, there are no prestarted VM's. I can try to see if that
> makes a difference.
>
>
> Are there any more steps I can take to debug this issue regarding the
> locks?
>
> Regards,
>
> Rik
>
>>> I've added my findings to bug 1462236, but I'm no longer sure the
>>> issue is the same as the one initially reported.
>>>
>>> When the first VM of a pool is started:
>>>
>>> 2019-05-14 13:26:46,058+02 INFO  
>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
>>> IsVmDuringInitiatingVDSCommand( 
>>> IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
>>>  log id: 2fb4f7f5
>>> 2019-05-14 13:26:46,058+02 INFO  
>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
>>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2fb4f7f5
>>> 2019-05-14 13:26:46,208+02 INFO  [org.ovirt.engine.core.bll.VmPoolHandler] 
>>> (default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to 
>>> object 
>>> 'EngineLock:{exclusiveLocks='[d8a99676-d520-425e-9974-1b1efe6da8a5=VM]', 
>>> 

[ovirt-users] Re: deprecating export domain?

2019-05-17 Thread Andreas Elvers
You're so right. Thanks.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KIT4FV6ATTRGEEDTU75PDU556NRGX3TO/


[ovirt-users] Re: ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-17 Thread Andreas Elvers
Yeah. But I think this is just an artefact of the current version. All images 
are in sync.
dom_md/ids is an obsolete file anyway, as the docs say.

see vdsm/block-storage-domains
https://www.ovirt.org/develop/developer-guide/vdsm/block-storage-domains.html
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MVQXFPVRJCHGL4YJ3KAJQLXXBTGJ6LNO/


[ovirt-users] Re: VM pools broken in 4.3

2019-05-17 Thread Rik Theys
Hi Gianluca,

We are not using gluster, but FC storage.

All VMs from the pool are created from a template.

Regards,

Rik

On 5/16/19 6:48 PM, Gianluca Cecchi wrote:
> On Thu, May 16, 2019 at 6:32 PM Lucie Leistnerova  > wrote:
>
> Hi Rik,
>
> On 5/14/19 2:21 PM, Rik Theys wrote:
>>
>> Hi,
>>
>> It seems VM pools are completely broken since our upgrade to 4.3.
>> Is anybody else also experiencing this issue?
>>
> I've tried to reproduce this issue. And I can use pool VMs as
> expected, no problem. I've tested clean install and also upgrade
> from 4.2.8.7.
> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
> ovirt-web-ui-1.5.2-1.el7ev.noarch
>>
>> Only a single instance from a pool can be used. Afterwards the
>> pool becomes unusable due to a lock not being released. Once
>> ovirt-engine is restarted, another (single) VM from a pool can be
>> used.
>>
> What users are running the VMs? What are the permissions?
> Each VM is running by other user? Were already some VMs running
> before the upgrade?
> Please provide exact steps.
>>
>>
> Hi, just an idea... could it be related in any way to the "disks always
> created as preallocated" problems reported by users using Gluster as
> backend storage?
> What kind of storage domains are you using, Rik?
>
> Gianluca 

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7YU4VIWOFBN4MB4FDOHQCUIUSEHNW2TV/


[ovirt-users] Re: VM pools broken in 4.3

2019-05-17 Thread Rik Theys
Hi Lucie,

On 5/16/19 6:27 PM, Lucie Leistnerova wrote:
>
> Hi Rik,
>
> On 5/14/19 2:21 PM, Rik Theys wrote:
>>
>> Hi,
>>
>> It seems VM pools are completely broken since our upgrade to 4.3. Is
>> anybody else also experiencing this issue?
>>
> I've tried to reproduce this issue. And I can use pool VMs as
> expected, no problem. I've tested clean install and also upgrade from
> 4.2.8.7.
> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
> ovirt-web-ui-1.5.2-1.el7ev.noarch 
That is strange. I will try to create a new pool to verify if I also
have the problem with the new pool. Currently we are having this issue
with two different pools. Both pools were created in August or September
of last year. I believe it was on 4.2 but could still have been 4.1.
>>
>> Only a single instance from a pool can be used. Afterwards the pool
>> becomes unusable due to a lock not being released. Once ovirt-engine
>> is restarted, another (single) VM from a pool can be used.
>>
> What users are running the VMs? What are the permissions?

The users are taking VMs from the pools using the user portal. They are
all members of a group that has the UserRole permission on the pools.

> Each VM is running by other user? Were already some VMs running before
> the upgrade?

A user can take at most 1 VM from each pool. So it's possible a user has
two VMs running (but not from the same pool). It doesn't matter which
user is taking a VM from the pool. Once a user has taken a VM from the
pool, no other user can take a VM. If the user that was able to take a
VM powers it down and tries to run a new VM, it will also fail.

During the upgrade of the host, no VM's were running.

> Please provide exact steps. 

1. ovirt-engine is restarted.

2. User A takes a VM from the pool and can work.

3. User B can not take a VM from that pool.

4. User A powers off the VM it was using. Once the VM is down, (s)he
tries to take a new VM, which also fails now.

It seems the VM pool is locked when the first user takes a VM and the
lock is never released.

In our case, there are no prestarted VMs. I can try to see if that
makes a difference.


Are there any more steps I can take to debug this issue regarding the locks?

Regards,

Rik

>> I've added my findings to bug 1462236, but I'm no longer sure the
>> issue is the same as the one initially reported.
>>
>> When the first VM of a pool is started:
>>
>> 2019-05-14 13:26:46,058+02 INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
>> IsVmDuringInitiatingVDSCommand( 
>> IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
>>  log id: 2fb4f7f5
>> 2019-05-14 13:26:46,058+02 INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2fb4f7f5
>> 2019-05-14 13:26:46,208+02 INFO  [org.ovirt.engine.core.bll.VmPoolHandler] 
>> (default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to 
>> object 
>> 'EngineLock:{exclusiveLocks='[d8a99676-d520-425e-9974-1b1efe6da8a5=VM]', 
>> sharedLocks=''}'
>>
>> -> it has acquired a lock (lock1)
>>
>> 2019-05-14 13:26:46,247+02 INFO  
>> [org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to object 
>> 'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
>>  sharedLocks=''}'
>>
>> -> it has acquired another lock (lock2)
>>
>> 2019-05-14 13:26:46,352+02 INFO  
>> [org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Running command: 
>> AttachUserToVmFromPoolAndRunCommand internal: false. Entities affected :  
>> ID: 4c622213-e5f4-4032-8639-643174b698cc Type: VmPoolAction group 
>> VM_POOL_BASIC_OPERATIONS with role type USER
>> 2019-05-14 13:26:46,393+02 INFO  
>> [org.ovirt.engine.core.bll.AddPermissionCommand] (default task-6) 
>> [e3c5745c-e593-4aed-ba67-b173808140e8] Running command: AddPermissionCommand 
>> internal: true. Entities affected :  ID: 
>> d8a99676-d520-425e-9974-1b1efe6da8a5 Type: VMAction group 
>> MANIPULATE_PERMISSIONS with role type USER
>> 2019-05-14 13:26:46,433+02 INFO  
>> [org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Succeeded giving user 
>> 'a5bed59c-d2fe-4fe4-bff7-52efe089ebd6' permission to Vm 
>> 'd8a99676-d520-425e-9974-1b1efe6da8a5'
>> 2019-05-14 13:26:46,608+02 INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
>> IsVmDuringInitiatingVDSCommand( 
>> IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
>>  log id: 67acc561
>> 2019-05-14 13:26:46,608+02 INFO  
>> 

[ovirt-users] Re: Dropped RX Packets

2019-05-17 Thread Karli Sjöberg
On 16 May 2019 at 19:17, Magnus Isaksson wrote:

> Hello @Strahil,
> The packet drops are frequent: every time I run "ip -s link" on the guest
> there are new dropped packets, while on the hosts it says "0" and in oVirt
> it says "0".
> I can run tcpdump on hosts and guests, but I don't know how to capture the
> dropped packets with tcpdump.
> There are no RX or TX errors anywhere, not on hosts, guests or switches.
> The connection drops are completely random, sometimes after a few minutes
> and sometimes after a couple of hours, really hard to narrow down, but this
> may be an error in our customer's network; they are investigating it now,
> so I will come back with that issue if it still persists.
>
> @Oliver: I tried this; unfortunately still the same result, still dropping
> packets.
>
> @Darell: I tried increasing the RX and TX buffers on the hosts, but the
> guests still drop packets.
>
> I am using dual 10G, set up in active-backup going to two switches, but the
> second switch is now turned off during the testing to narrow this down.
>
> Regards,
> Magnus

I have experienced these types of issues, you know, in general on any platform,
when frame sizes don't quite match up. From NIC to switch to router - the whole
chain from you to your customer - is that investigated at all?

Another thing to check might be the VM NIC "hardware": whether the issue goes
away or changes if you switch to an "e1000" instead of "virtio", or vice versa?

/K
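
PS: if you want to script the e1000-vs-virtio experiment rather than click
through the UI, here is a rough sketch with the oVirt Python SDK. The engine
URL, credentials, VM name "myvm" and NIC name "nic1" are placeholders, and the
VM normally has to be powered off for the interface change to take effect:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',                                   # placeholder
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]

nics_service = vms_service.vm_service(vm.id).nics_service()
nic = next(n for n in nics_service.list() if n.name == 'nic1')

# Switch the emulated NIC model to e1000; revert to VIRTIO the same way
# to see whether the packet drops follow the NIC model.
nics_service.nic_service(nic.id).update(
    types.Nic(interface=types.NicInterface.E1000),
)

connection.close()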
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G3K7FGH5U557O6743Y5NIVJLLTLRYBNJ/