[ovirt-users] Slow ova export performance

2020-07-15 Thread francesco--- via Users
Hi All,

I'm facing a really slow export of VMs hosted on a single-node cluster with 
local storage. The VM disk is 600 GB and the effective usage is around 300 GB. 
I estimated that the following process would take about 15 hours to complete:

vdsm 25338 25332 99 04:14 pts/0    07:40:09 qemu-img measure -O qcow2 
/rhev/data-center/mnt/_data/6775c41c-7d67-451b-8beb-4fd086eade2e/images/a084fa36-0f93-45c2-a323-ea9ca2d16677/55b3eac5-05b2-4bae-be50-37cde7050697

An strace -p on the PID shows a slow progression toward the effective size:

lseek(11, 3056795648, SEEK_DATA)= 3056795648
lseek(11, 3056795648, SEEK_HOLE)= 13407092736
lseek(14, 128637468672, SEEK_DATA)  = 128637468672
lseek(14, 128637468672, SEEK_HOLE)  = 317708828672
lseek(14, 128646250496, SEEK_DATA)  = 128646250496
lseek(14, 128646250496, SEEK_HOLE)  = 317708828672
lseek(14, 128637730816, SEEK_DATA)  = 128637730816
lseek(14, 128637730816, SEEK_HOLE)  = 317708828672
lseek(14, 128646774784, SEEK_DATA)  = 128646774784
lseek(14, 128646774784, SEEK_HOLE)  = 317708828672
lseek(14, 128646709248, SEEK_DATA)  = 128646709248

The process takes a full core, but I don't think this is the problem. The 
I/O is almost nothing.
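
For reference, a read-only way to see what "measure" has to walk through is to 
look at the chain and the extent map of the volume (a sketch; filefrag assumes 
the local ext4/XFS filesystem backing the data domain):

VOL=/rhev/data-center/mnt/_data/6775c41c-7d67-451b-8beb-4fd086eade2e/images/a084fa36-0f93-45c2-a323-ea9ca2d16677/55b3eac5-05b2-4bae-be50-37cde7050697
# virtual size, format and backing chain of the volume being measured
qemu-img info --backing-chain "$VOL"
# extent count: a heavily fragmented sparse file means many of the
# lseek(SEEK_DATA)/lseek(SEEK_HOLE) round trips visible in the strace above
filefrag "$VOL"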

Any idea/suggestion?

Thank you for your time
Regards
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QF2QIA4ZRQIE6HQNJSNRBCPM25I3O5D3/


[ovirt-users] Fail install SHE ovirt-engine from backupfile (4.3 -> 4.4)

2020-09-21 Thread francesco--- via Users
Hi Everyone,

In a test environment I'm trying to deploy a single-node self-hosted engine 4.4 
on CentOS 8 from a 4.3 backup. The current setup is:
- node1 with CentOS 7, oVirt 4.3 and a working self-hosted engine. The data 
domain is a local NFS export;
- node2 with CentOS 8, where we are trying to deploy the engine starting from 
the node1 engine backup;
- host1, with CentOS 7.8, running a couple of VMs (4.3).

I'm following the guide: 
https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_the_Manager_to_4-4_4-3_SHE
Everything seems to be working fine: the engine on node1 is in global maintenance 
mode and the ovirt-engine service is stopped. The deployment on node2 gets stuck 
on the following error:

TASK [ovirt.hosted_engine_setup : Wait for OVF_STORE disk content]

[ ERROR ] {'msg': 'non-zero return code', 'cmd': "vdsm-client Image prepare 
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 
imageID=e48a66dd-74c9-43eb-890e-778e9c4ee8db 
volumeID=06bb5f34-112d-4214-91d2-53d0bdb84321 | grep path | awk '{ print $2 }' 
| xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", 'stdout': '', 'stderr': 
"vdsm-client: Command Image.prepare with args {'storagepoolID': 
'06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': 
'2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': 
'e48a66dd-74c9-43eb-890e-778e9c4ee8db',
'volumeID': '06bb5f34-112d-4214-91d2-53d0bdb84321'} failed:\n(code=309, 
message=Unknown pool id, pool not connected: 
('06c58622-f99b-11ea-9122-00163e1bbc93',))\ntar: This does not look like a tar 
archive\ntar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in 
archive\ntar: Exiting with failure status due to previous errors", 'rc': 2, 
'start': '2020-09-21 17:14:17.293090', 'end': '2020-09-21 17:14:17.644253', 
'delta': '0:00:00.351163', 'changed': True, 'failed': True, 'invocation': 
{'module_args': {'warn': False, '_raw_params': "vdsm-client Image prepare 
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 
imageID=e48a66dd-74c9-43eb-890e-778e9c4ee8db 
volumeID=06bb5f34-112d-4214-91d2-53d0bdb84321 | grep path | awk '{ print $2 }' 
| xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", '_uses_shell': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': 
None, 'executable
 ': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
[], 'stderr_lines': ["vdsm-client: Command Image.prepare with args 
{'storagepoolID': '06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': 
'2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': 
'e48a66dd-74c9-43eb-890e-778e9c4ee8db', 'volumeID': 
'06bb5f34-112d-4214-91d2-53d0bdb84321'} failed:", "(code=309, message=Unknown 
pool id, pool not connected: ('06c58622-f99b-11ea-9122-00163e1bbc93',))", 'tar: 
This does not look like a tar archive', 'tar: 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in archive', 'tar: Exiting 
with failure status due to previous errors'], '_ansible_no_log': False, 
'attempts':
12, 'item': {'name': 'OVF_STORE', 'image_id': 
'06bb5f34-112d-4214-91d2-53d0bdb84321', 'id': 
'e48a66dd-74c9-43eb-890e-778e9c4ee8db'}, 'ansible_loop_var': 'item', 
'_ansible_item_label': {'name': 'OVF_STORE', 'image_id': 
'06bb5f34-112d-4214-91d2-53d0bdb84321', 'id': 
'e48a66dd-74c9-43eb-890e-778e9c4ee8db'}}
[ ERROR ] {'msg': 'non-zero return code', 'cmd': "vdsm-client Image prepare 
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 
imageID=750428bd-1273-467f-9b27-7f6fe58a446c 
volumeID=1c89c678-f883-4e61-945c-5f7321add343 | grep path | awk '{ print $2 }' 
| xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", 'stdout': '', 'stderr': 
"vdsm-client: Command Image.prepare with args {'storagepoolID': 
'06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': 
'2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': 
'750428bd-1273-467f-9b27-7f6fe58a446c',
'volumeID': '1c89c678-f883-4e61-945c-5f7321add343'} failed:\n(code=309, 
message=Unknown pool id, pool not connected: 
('06c58622-f99b-11ea-9122-00163e1bbc93',))\ntar: This does not look like a tar 
archive\ntar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in 
archive\ntar: Exiting with failure status due to previous errors", 'rc': 2, 
'start': '2020-09-21 17:16:26.030343', 'end': '2020-09-21 17:16:26.381862', 
'delta': '0:00:00.351519', 'changed': True, 'failed': True, 'invocation': 
{'module_args': {'warn': False, '_raw_params': "vdsm-client Image prepare 
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 
imageID=750428bd-1273-467f-9b27-7f6fe58a446c 
volumeID=1c89c678-f883-4e61-945c-5f7321add343 | grep path | awk '{ print $2 }' 
| xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", '_uses_shell': True, 
'stdin_ad

[ovirt-users] Re: Fail install SHE ovirt-engine from backupfile (4.3 -> 4.4)

2020-09-22 Thread Francesco via Users

Ok, solved.

The problem was simply that node2 could not mount the node1 data domain via 
NFS. I added node1 to the node2 firewall and to /etc/exports, tested, and 
everything went fine.
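
For anyone hitting the same thing, the moving parts are the NFS export and the 
firewall between the two nodes. An illustrative sketch of what to check on the 
server that exports the data domain (hypothetical host names and options; adjust 
the direction and values to your own layout):

# /etc/exports on the server holding the data domain
/data   node2.example.com(rw,sync,no_subtree_check)

exportfs -ra
firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
firewall-cmd --reload
# quick check from the host that needs to mount it
showmount -e node1.example.com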


Regards,
Francesco


[ovirt-users] Can't find storage server connection

2020-11-19 Thread francesco--- via Users
Hi all,

I'm using the oVirt Python SDK to retrieve information about storage domains on 
several hosts (CentOS 7/oVirt 4.3 and CentOS 8/oVirt 4.4), but on some of them 
the script exits with the following error:

Traceback (most recent call last):
  File "get_uuid.py", line 70, in 
storage_domain = sds_service.list(search='name=data-foo')[0]
  File "/root/.local/lib/python2.7/site-packages/ovirtsdk4/services.py", line 
26296, in list
return self._internal_get(headers, query, wait)
  File "/root/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 
211, in _internal_get
return future.wait() if wait else future
  File "/root/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 
55, in wait
return self._code(response)
  File "/root/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 
208, in callback
self._check_fault(response)
  File "/root/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 
132, in _check_fault
self._raise_error(response, body)
  File "/root/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 
118, in _raise_error
raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "Can't 
find storage server connection for id '92444a95-0be7-4589-ac46-1ed6dfe7ed4c'.". 
HTTP response code is 500.

The portion of the script that searches for the storage domain is the following:

sds_service = connection.system_service().storage_domains_service()
storage_domain = 
sds_service.list(search='name={}'.format(storage_domain_name))[0]

Now, I have no real clue what the ID '92444a95-0be7-4589-ac46-1ed6dfe7ed4c' refers 
to, but digging in the engine logs it appears to be a StorageServerConnections ID:

[root@ovirt-engine ovirt-engine]# zgrep 92444a95-0be7-4589-ac46-1ed6dfe7ed4c 
*.gz
engine.log-20201108.gz:2020-11-07 06:05:54,352+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-79) 
[3ffb810c] START, ConnectStorageServerVDSCommand(HostName = 
another-server.foo.com, 
StorageServerConnectionManagementVDSParameters:{hostId='7d202bc7-002b-4426-8446-99b6b346874e',
 storagePoolId='82d0b3de-0334-451c-8321-c3533de9a894', storageType='LOCALFS', 
connectionList='[StorageServerConnections:{id='92444a95-0be7-4589-ac46-1ed6dfe7ed4c',
 connection='/data', iqn='null', vfsType='null', mountOptions='null', 
nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', 
netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 1509fe17
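
For reference, the REST API exposes the storage server connections the engine 
knows about, so one can cross-check whether that ID still exists (a sketch; the 
engine FQDN and credentials are placeholders):

curl -sk -u 'admin@internal:password' \
  'https://engine.example.com/ovirt-engine/api/storageconnections' \
  | grep -B2 -A6 '92444a95-0be7-4589-ac46-1ed6dfe7ed4c'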


As I said, I tried to execute the script from several hosts, some with oVirt 4.3 
and others with oVirt 4.4; it may or may not work on either version.

When I try to manage the storage domain via the ovirt-engine GUI on the hosts 
where the script exits with the mentioned error, I receive the following error:

Uncaught exception occurred. Please try reloading the page. Details: 
(TypeError) : Cannot read property 'a' of null
Please have your administrator check the UI logs


Any idea, however slight, of what is going on?

Thank you for your time and help,
Francesco
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EYKKYPXYUZALYBJ3YBRANW5RBVVFWRBJ/


[ovirt-users] Re: Can't find storage server connection

2020-11-23 Thread Francesco via Users

A tiny little "up", because this is driving me crazy.

Francesco



--
--  
Shellrent - Il primo hosting italiano Security First

*Francesco Lorenzini*
/System Administrator & DevOps Engineer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DZC5SDQOOPYSI46EPSFPLH6MBJQTGAVP/


[ovirt-users] Re: Can't find storage server connection

2020-11-23 Thread Francesco via Users

I tried executing the script on different hosts, passing different data storage domains:

- let's say that on host foo-host1, executing the script and querying its own 
data storage domain named foo-data1 works correctly;
- and let's say that on host foo-host2, the same script querying its own data 
storage domain foo-data2 returns the mentioned error.


If I execute the script on foo-host2 querying the foo-data1 domain, I get the 
error. If I execute it on foo-host1 querying foo-data2, I still get the error.


The engine's ui.log, when I try to manage the storage domain, reports the 
following error:


2020-11-23 09:59:31,934+01 ERROR 
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] 
(default task-239) [] Permutation name: 68A62BEB12822F65FE66B14A9E16480A
2020-11-23 09:59:31,934+01 ERROR 
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] 
(default task-239) [] Uncaught exception: 
com.google.gwt.core.client.JavaScriptException: (TypeError) : Cannot 
read property 'a' of null
    at 
org.ovirt.engine.ui.uicommonweb.models.storage.FileStorageModel.$lambda$0(FileStorageModel.java:34)
    at 
org.ovirt.engine.ui.uicommonweb.models.storage.FileStorageModel$lambda$0$Type.onSuccess(FileStorageModel.java:34)
    at 
org.ovirt.engine.ui.frontend.Frontend$1.$onSuccess(Frontend.java:227)
    at 
org.ovirt.engine.ui.frontend.Frontend$1.onSuccess(Frontend.java:227)
    at 
org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.$onSuccess(OperationProcessor.java:133)
    at 
org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.onSuccess(OperationProcessor.java:133)
    at 
org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:270)
    at 
org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:270)
    at 
com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198)
    at 
com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:233)
    at 
com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409)

    at Unknown.eval(webadmin-0.js)
    at com.google.gwt.core.client.impl.Impl.apply(Impl.java:306)
    at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:345)
    at Unknown.eval(webadmin-0.js)



[ovirt-users] Random Crash

2021-02-05 Thread francesco--- via Users
Hi all,

I'm experiencing random reboots on several oVirt nodes (CentOS 7/8, oVirt 
4.3/4.4). Sometimes it happens three times in a day, and the more hosts I add 
to my pool, the more often I notice it.

The logs are not helpful: it looks like a brutal power-off, because there are no 
entries at all in messages, vdsm, or secure (I looked through all the logs) 
between the last "normal" entry (user logged in/off, normal vdsm activity, etc.) 
and the first entry of the boot. kdump is enabled and /var/crash is empty. I used 
to run Xen on servers from the same provider and I didn't have all of these 
frequent reboots, which is why I'm not sure it is a hardware-related issue.

Any advice on what to enable to get more information about these crashes?
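
For reference, a sketch of the kind of things that can be enabled or checked to 
catch a silent power-off like this (assuming CentOS 7/8 defaults; adjust to taste):

# keep the journal across reboots so the last seconds before the crash survive
mkdir -p /var/log/journal && systemctl restart systemd-journald
# confirm kdump is really armed, not just enabled
kdumpctl status
# after the next reboot, look at the kernel messages of the previous boot
journalctl -k -b -1 | tail -n 50
# check the BMC event log for power/thermal events (if ipmitool is available)
ipmitool sel elist | tail -n 20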

Thank you for your time,
Francesco
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G3DOP7A7SREBYQ5IY24HBE4GYCKM6QH7/


[ovirt-users] Re: Random Crash

2021-02-08 Thread Francesco via Users

Hi Strahil,

I have the Power Management setting disabled on all the hosts, so I doubt it's a 
fencing-related issue, but thank you for the suggestion. The only log I see in 
the engine is the "set non responsive status" entry.


Francesco

On 06/02/2021 06:20, Strahil Nikolov via Users wrote:

My first guess would be fencing.
Fencing kicks in when there are network issues or when the Hypervisor 
is stuck.


Check the engine's logs to verify that guess.

Best Regards,
Strahil Nikolov



--
--  
Shellrent - Il primo hosting italiano Security First

*Francesco Lorenzini*
/System Administrator & DevOps Engineer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UB7ASXNGSV4HBUQA2VXNZ4REMQYXA2PI/


[ovirt-users] Importing VM from Xen Server 7.1

2021-08-27 Thread francesco--- via Users
Hi all,

resuming the "dead" thread "Importing VM from Xen Server 6.5" 
(https://lists.ovirt.org/pipermail/users/2016-August/075213.html), I'm trying to 
import a VM from Xen Server 7.1 via the GUI into a CentOS 8.4 host running oVirt 4.4.

I created the SSH key for the vdsm user, added the IP to the target host 
firewall, verified that an SSH connection works, and installed netcat on the 
target host. But, as in the original thread, when I execute the "Load" command 
to get the VM list, I get the following error on the host and in the engine:

Aug 27 12:04:43 centos8-4.host vdsm[78126]: ERROR error connecting to 
hypervisor#012Traceback (most recent call last):#012  File 
"/usr/lib/python3.6/site-packages/vdsm/v2v.py", line 193, in 
get_external_vm_names#012passwd=password)#012  File 
"/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 107, 
in open_connection#012return function.retry(libvirtOpen, timeout=10, 
sleep=0.2)#012  File 
"/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 58, in 
retry#012return func()#012  File 
"/usr/lib64/python3.6/site-packages/libvirt.py", line 148, in openAuth#012
raise libvirtError('virConnectOpenAuth() failed')#012libvirt.libvirtError: End 
of file while reading data: Ncat: No such file or directory.: Input/output error

Trying the command virsh -c xen+ssh://root@xen7.target gives the same error:

error: failed to connect to the hypervisor
error: End of file while reading data: Ncat: No such file or directory.: 
Input/output error
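
For reference, as far as I understand the xen+ssh transport runs nc/ncat on the 
remote side to reach the libvirt socket, so a quick check is whether ncat exists 
there and whether the socket it is asked to open is actually present (a sketch; 
the socket path is the usual libvirt default and may differ on XenServer):

ssh root@xen7.target 'command -v nc ncat; ls -l /var/run/libvirt/libvirt-sock'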

Any ideas?

Thank you for your time and help.

Francesco

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QR2B62PLDI2XO7WLTVTBG3MZRDC4RQ2Q/


[ovirt-users] Missing snapshot in the engine

2021-11-08 Thread francesco--- via Users
Hi,

I have an issue with a VM (Windows Server 2016) running on a CentOS 8, oVirt 
4.4.8 host with oVirt engine 4.4.5. I used to take regular snapshots of this VM 
(deleting the previous one each time), but starting from 25/10 the task fails 
with the errors attached at the bottom. The volume ID mentioned in the error... :

[...] vdsm.storage.exception.prepareIllegalVolumeError: Cannot prepare illegal 
volume: ('5cb3fe58-3e01-4d32-bc7c-5907a4f858a8',) [...]

... refers to a snapshot volume: it is different from (and smaller than) the 
current volume, which the engine UI shows with ID 
5aad30c7-96f0-433d-95c8-2317e5f80045:

[root@ovirt-host44 4d79c1da-34f0-44e3-8b92-c4bcb8524d83]# ls -lh
total 163G
-rw-rw 1 vdsm kvm 154G Nov  8 10:32 5aad30c7-96f0-433d-95c8-2317e5f80045
-rw-rw 1 vdsm kvm 1.0M Aug 31 11:49 
5aad30c7-96f0-433d-95c8-2317e5f80045.lease
-rw-r--r-- 1 vdsm kvm  360 Nov  8 10:19 
5aad30c7-96f0-433d-95c8-2317e5f80045.meta
-rw-rw 1 vdsm kvm 8.2G Oct 25 05:16 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8
-rw-rw 1 vdsm kvm 1.0M Oct 23 05:15 
5cb3fe58-3e01-4d32-bc7c-5907a4f858a8.lease
-rw-r--r-- 1 vdsm kvm  254 Oct 25 05:16 
5cb3fe58-3e01-4d32-bc7c-5907a4f858a8.meta


It seems that the last working snapshot, performed on 25/10, was not completely 
deleted and is now used as the base for a new snapshot on the host side, but it 
is not listed in the engine.

Any idea? Should I manually merge the snapshot on the host side? If so, any 
pointers on how to do that?
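
For reference, a couple of read-only checks that show how the chain looks on the 
host before touching anything (a sketch; <storage-domain-uuid> is a placeholder 
for the domain this image lives on):

# what vdsm thinks the volume chains of the domain look like
vdsm-tool dump-volume-chains <storage-domain-uuid>
# what qemu thinks the backing chain of the leftover volume is, run from the
# image directory shown above
qemu-img info --backing-chain 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8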

Thank you for your time,
Francesco



--- Engine log during snapshot removal:



2021-11-08 10:19:25,751+01 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (default 
task-63) [469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[f1d56493-b5e0-480f-87a3-5e7f373712fa=VM]', 
sharedLocks=''}'
2021-11-08 10:19:26,306+01 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-49) 
[469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] Running command: 
CreateSnapshotForVmCommand internal: false. Entities affected :  ID: 
f1d56493-b5e0-480f-87a3-5e7f373712fa Type: VMAction group 
MANIPULATE_VM_SNAPSHOTS with role type USER
2021-11-08 10:19:26,383+01 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand] 
(EE-ManagedThreadFactory-engine-Thread-49) 
[469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] Running command: 
CreateSnapshotDiskCommand internal: true. Entities affected :  ID: 
f1d56493-b5e0-480f-87a3-5e7f373712fa Type: VMAction group 
MANIPULATE_VM_SNAPSHOTS with role type USER
2021-11-08 10:19:26,503+01 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] 
(EE-ManagedThreadFactory-engine-Thread-49) 
[469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] Running command: CreateSnapshotCommand 
internal: true. Entities affected :  ID: ---- 
Type: Storage
2021-11-08 10:19:26,616+01 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-49) 
[469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] START, CreateVolumeVDSCommand( 
CreateVolumeVDSCommandParameters:{storagePoolId='609ff8db-09c5-435b-b2e5-023d57003138',
 ignoreFailoverLimit='false', 
storageDomainId='e25db7d0-060a-4046-94b5-235f38097cd8', 
imageGroupId='4d79c1da-34f0-44e3-8b92-c4bcb8524d83', 
imageSizeInBytes='214748364800', volumeFormat='COW', 
newImageId='74e7188d-3727-4ed6-a2e5-dfa73b9e7da3', imageType='Sparse', 
newImageDescription='', imageInitialSizeInBytes='0', 
imageId='5aad30c7-96f0-433d-95c8-2317e5f80045', 
sourceImageGroupId='4d79c1da-34f0-44e3-8b92-c4bcb8524d83', 
shouldAddBitmaps='false'}), log id: 514e7f02
2021-11-08 10:19:26,768+01 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-49) 
[469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] FINISH, CreateVolumeVDSCommand, return: 
74e7188d-3727-4ed6-a2e5-dfa73b9e7da3, log id: 514e7f02
2021-11-08 10:19:26,805+01 INFO  
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask] 
(EE-ManagedThreadFactory-engine-Thread-49) 
[469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] CommandAsyncTask::Adding 
CommandMultiAsyncTasks object for command 'eb1f1fdd-a46e-45e1-a6f0-3a97fe1f6e28'
2021-11-08 10:19:26,805+01 INFO  
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks] 
(EE-ManagedThreadFactory-engine-Thread-49) 
[469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] CommandMultiAsyncTasks::attachTask: 
Attaching task '4bb54004-f96c-4f14-abca-bea477d866ea' to command 
'eb1f1fdd-a46e-45e1-a6f0-3a97fe1f6e28'.
2021-11-08 10:19:27,033+01 INFO  
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager] 
(EE-ManagedThreadFactory-engine-Thread-49) 
[469dbfd8-2e2f-4cb3-84b1-d456acc78fd9] Adding task 
'4bb54004-f96c-4f14-abca-bea477d866ea' (Parent Command 'CreateSnapshot', 
Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), 
polling hasn't started yet..
2021-11-08 10:19:27,282+01 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(EE-Managed

[ovirt-users] ILLEGAL volume delete via vdsm-client

2021-11-16 Thread francesco--- via Users
Hi all,

I'm trying to delete, via the vdsm-client tool, an ILLEGAL volume that is not 
listed in the engine database. The volume ID is 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8:

[root@ovirthost ~]# vdsm-tool dump-volume-chains 
e25db7d0-060a-4046-94b5-235f38097cd8

Images volume chains (base volume first)

   image:4d79c1da-34f0-44e3-8b92-c4bcb8524d83

 Error: more than one volume pointing to the same parent volume 
e.g: (_BLANK_UUID<-a), (a<-b), (a<-c)

 Unordered volumes and children:

 - ---- <- 
5aad30c7-96f0-433d-95c8-2317e5f80045
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, 
type: SPARSE, capacity: 214748364800, truesize: 165493616640

 - 5aad30c7-96f0-433d-95c8-2317e5f80045 <- 
5cb3fe58-3e01-4d32-bc7c-5907a4f858a8
   status: OK, voltype: LEAF, format: COW, legality: ILLEGAL, type: 
SPARSE, capacity: 214748364800, truesize: 8759619584

 - 5aad30c7-96f0-433d-95c8-2317e5f80045 <- 
674e85d8-519a-461f-9dd6-aca44798e088
   status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: 
SPARSE, capacity: 214748364800, truesize: 200704

With the command vdsm-client Volume getInfo I can retrieve the info about the 
volume 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8:

 vdsm-client Volume getInfo storagepoolID=c0e7a0c5-8048-4f30-af08-cbd17d797e3b 
volumeID=5cb3fe58-3e01-4d32-bc7c-5907a4f858a8 
storagedomainID=e25db7d0-060a-4046-94b5-235f38097cd8 
imageID=4d79c1da-34f0-44e3-8b92-c4bcb8524d83
{
"apparentsize": "8759676160",
"capacity": "214748364800",
"children": [],
"ctime": "1634958924",
"description": "",
"disktype": "DATA",
"domain": "e25db7d0-060a-4046-94b5-235f38097cd8",
"format": "COW",
"generation": 0,
"image": "4d79c1da-34f0-44e3-8b92-c4bcb8524d83",
"lease": {
"offset": 0,
"owners": [],
"path": 
"/rhev/data-center/mnt/ovirthost.com:_data/e25db7d0-060a-4046-94b5-235f38097cd8/images/4d79c1da-34f0-44e3-8b92-c4bcb8524d83/5cb3fe58-3e01-4d32-bc7c-5907a4f858a8.lease",
"version": null
},
"legality": "ILLEGAL",
"mtime": "0",
"parent": "5aad30c7-96f0-433d-95c8-2317e5f80045",
"pool": "",
"status": "ILLEGAL",
"truesize": "8759619584",
"type": "SPARSE",
"uuid": "5cb3fe58-3e01-4d32-bc7c-5907a4f858a8",
"voltype": "LEAF"
}

I can't remove it due to the following error:

vdsm-client Volume delete storagepoolID=c0e7a0c5-8048-4f30-af08-cbd17d797e3b 
volumeID=5cb3fe58-3e01-4d32-bc7c-5907a4f858a8 
storagedomainID=e25db7d0-060a-4046-94b5-235f38097cd8 
imageID=4d79c1da-34f0-44e3-8b92-c4bcb8524d83 force=true
vdsm-client: Command Volume.delete with args {'storagepoolID': 
'c0e7a0c5-8048-4f30-af08-cbd17d797e3b', 'volumeID': 
'5cb3fe58-3e01-4d32-bc7c-5907a4f858a8', 'storagedomainID': 
'e25db7d0-060a-4046-94b5-235f38097cd8', 'imageID': 
'4d79c1da-34f0-44e3-8b92-c4bcb8524d83', 'force': 'true'} failed:
(code=309, message=Unknown pool id, pool not connected: 
('c0e7a0c5-8048-4f30-af08-cbd17d797e3b',))

I'm performing the operation directly on the SPM. I searched for a while but I 
didn't find anything useful. Any tips or docs that I missed?
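
For reference, the storagepoolID passed to Volume delete has to be the pool the 
host is actually connected to. A sketch of how to double-check it (assuming the 
Host getConnectedStoragePools verb is available in this vdsm version):

vdsm-client Host getConnectedStoragePools
# then retry the Volume delete call using the returned pool UUID as storagepoolID
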
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OYJY72UW4KZCYJZY3KRSOLR7ASQPPUVZ/


[ovirt-users] Error while removing snapshot: Unable to get volume info

2022-01-10 Thread francesco--- via Users
Hi all,

I'm trying to remove a snapshot from an HA VM in a setup with GlusterFS (2 CentOS 
8 Stream nodes with oVirt 4.4 + 1 CentOS 8 arbiter). The error that appears in 
the vdsm log of the host is:

2022-01-10 09:33:03,003+0100 ERROR (jsonrpc/4) [api] FINISH merge error=Merge 
failed: {'top': '441354e7-c234-4079-b494-53fa99cdce6f', 'base': 
'fdf38f20-3416-4d75-a159-2a341b1ed637', 'job': 
'50206e3a-8018-4ea8-b191-e4bc859ae0c7', 'reason': 'Unable to get volume info 
for domain 574a3cd1-5617-4742-8de9-4732be4f27e0 volume 
441354e7-c234-4079-b494-53fa99cdce6f'} (api:131)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/livemerge.py", line 285, in 
merge
drive.domainID, drive.poolID, drive.imageID, job.top)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5988, in 
getVolumeInfo
(domainID, volumeID))
vdsm.virt.errors.StorageUnavailableError: Unable to get volume info for domain 
574a3cd1-5617-4742-8de9-4732be4f27e0 volume 441354e7-c234-4079-b494-53fa99cdce6f

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in 
method
ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 776, in merge
drive, baseVolUUID, topVolUUID, bandwidth, jobUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5833, in merge
driveSpec, baseVolUUID, topVolUUID, bandwidth, jobUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/livemerge.py", line 288, in 
merge
str(e), top=top, base=job.base, job=job_id)

The volume list on the host differs from the engine's:

HOST:

vdsm-tool dump-volume-chains 574a3cd1-5617-4742-8de9-4732be4f27e0 | grep -A10 
0b995271-e7f3-41b3-aff7-b5ad7942c10d
   image:0b995271-e7f3-41b3-aff7-b5ad7942c10d

 - fdf38f20-3416-4d75-a159-2a341b1ed637
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, 
type: SPARSE, capacity: 53687091200, truesize: 44255387648

 - 10df3adb-38f4-41d1-be84-b8b5b86e92cc
   status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: 
SPARSE, capacity: 53687091200, truesize: 7335407616

ls -1 0b995271-e7f3-41b3-aff7-b5ad7942c10d
10df3adb-38f4-41d1-be84-b8b5b86e92cc
10df3adb-38f4-41d1-be84-b8b5b86e92cc.lease
10df3adb-38f4-41d1-be84-b8b5b86e92cc.meta
fdf38f20-3416-4d75-a159-2a341b1ed637
fdf38f20-3416-4d75-a159-2a341b1ed637.lease
fdf38f20-3416-4d75-a159-2a341b1ed637.meta


ENGINE:

engine=# select * from images where 
image_group_id='0b995271-e7f3-41b3-aff7-b5ad7942c10d';
-[ RECORD 1 ]-+-
image_guid| 10df3adb-38f4-41d1-be84-b8b5b86e92cc
creation_date | 2022-01-07 11:23:43+01
size  | 53687091200
it_guid   | ----
parentid  | 441354e7-c234-4079-b494-53fa99cdce6f
imagestatus   | 1
lastmodified  | 2022-01-07 11:23:39.951+01
vm_snapshot_id| bd2291a4-8018-4874-a400-8d044a95347d
volume_type   | 2
volume_format | 4
image_group_id| 0b995271-e7f3-41b3-aff7-b5ad7942c10d
_create_date  | 2022-01-07 11:23:41.448463+01
_update_date  | 2022-01-07 11:24:10.414777+01
active| t
volume_classification | 0
qcow_compat   | 2
-[ RECORD 2 ]-+-
image_guid| 441354e7-c234-4079-b494-53fa99cdce6f
creation_date | 2021-12-15 07:16:31.647+01
size  | 53687091200
it_guid   | ----
parentid  | fdf38f20-3416-4d75-a159-2a341b1ed637
imagestatus   | 1
lastmodified  | 2022-01-07 11:23:41.448+01
vm_snapshot_id| 2d610958-59e3-4685-b209-139b4266012f
volume_type   | 2
volume_format | 4
image_group_id| 0b995271-e7f3-41b3-aff7-b5ad7942c10d
_create_date  | 2021-12-15 07:16:32.37005+01
_update_date  | 2022-01-07 11:23:41.448463+01
active| f
volume_classification | 1
qcow_compat   | 0
-[ RECORD 3 ]-+-
image_guid| fdf38f20-3416-4d75-a159-2a341b1ed637
creation_date | 2020-08-12 17:16:07+02
size  | 53687091200
it_guid   | ----
parentid  | ----
imagestatus   | 4
lastmodified  | 2021-12-15 07:16:32.369+01
vm_snapshot_id| 603811ba-3cdd-4388-a971-05e300ced0c3
volume_type   | 2
volume_format | 4
image_group_id| 0b995271-e7f3-41b3-aff7-b5ad7942c10d
_create_date  | 2020-08-12 17:16:07.506823+02
_update_date  | 2021-12-15 07:16:32.37005+01
active| f
volume_classification | 1
qcow_compat   | 2
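
For reference, a read-only way to see what the backing chain actually looks like 
on disk, to compare against the engine rows above (a sketch, run from the image 
directory for 0b995271-e7f3-41b3-aff7-b5ad7942c10d on the host):

qemu-img info --backing-chain 10df3adb-38f4-41d1-be84-b8b5b86e92cc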

However, in the engine GUI I see only two snapshot IDs:

1- 10df3adb-38f4

[ovirt-users] Console - VNC password is 12 characters long, only 8 permitted

2022-02-14 Thread francesco--- via Users
Hi all,

I'm using websockify + noVNC to expose the VM console via browser, getting the 
graphicsconsoles ticket via the API. Everything works fine on every other host I 
have (more than 200): the console works both via the oVirt engine and via the 
browser. But on a single host (CentOS Stream release 8, oVirt 4.4.9) the console 
works only via the engine; when I try the connection via the browser I get the 
following error (vdsm log of the host):

 ERROR FINISH updateDevice error=unsupported configuration: VNC password is 12 
characters long, only 8 permitted 
 Traceback (most recent call last): 
 
   File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in 
method   
 ret = func(*args, **kwargs)
 
   File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 372, in 
updateDevice
 return self.vm.updateDevice(params)
 
   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3389, in 
updateDevice   
 return self._updateGraphicsDevice(params)  
 
   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 3365, in 
_updateGraphicsDevice  
 params['params']   
 
   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5169, in 
_setTicketForGraphicDev
 self._dom.updateDeviceFlags(xmlutils.tostring(graphics), 0)
 
   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in 
f
 ret = attr(*args, **kwargs)
 
   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper
 ret = f(*args, **kwargs)   
 
   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in 
wrapper  
 return func(inst, *args, **kwargs) 
 
   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3244, in 
updateDeviceFlags 
 raise libvirtError('virDomainUpdateDeviceFlags() failed')  
 
 libvirt.libvirtError: unsupported configuration: VNC password is 12 characters 
long, only 8 permitted


The error is pretty much self-explanatory, but I can't figure out why it happens 
only on this server, and I wonder whether I can set the length of the generated 
VNC password somewhere.

Thank you for your time,
Francesco
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HIBAN3JJHJYRWEN7UVFIRB57URLYWEFJ/


[ovirt-users] GlusterFS poor performance

2022-03-03 Thread francesco--- via Users
Hi all,

I'm running a GlusterFS 8.6 setup with two nodes and one arbiter. Both nodes and 
the arbiter are CentOS 8 Stream with oVirt 4.4. Under Gluster I have an LVM thin 
partition.

VMs running in this cluster have really poor write performance, while a test 
performed directly on the disk scores about 300 MB/s.

dd test on host1:

[root@ovirt-host1 tmp]# dd if=/dev/zero of=./foo.dat bs=256M count=1 oflag=dsync
1+0 records in
1+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.839861 s, 320 MB/s

dd test on host1 on gluster:

[root@ovirt-host1 tmp]# dd if=/dev/zero 
of=/rhev/data-center/mnt/glusterSD/ovirt-host1:_data/foo.dat bs=256M count=1 
oflag=dsync
1+0 records in
1+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 50.6889 s, 5.3 MB/s

Nonetheless, the write result inside a VM in the cluster is a bit faster than on 
the Gluster mount (dd results vary from 15 MB/s to 60 MB/s), and this is very 
strange to me:

root@vm1-ha:/tmp# dd if=/dev/zero of=./foo.dat bs=256M count=1 oflag=dsync; rm 
-f ./foo.dat
1+0 records in
1+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 5.58727 s, 48.0 MB/s
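
For what it's worth, a single 256M dd with oflag=dsync mostly measures one big 
synchronous, single-threaded write, so it tends to understate what the volume can 
sustain. A sketch of a slightly more representative test (assuming fio is 
installed, using the same mount path as above):

fio --name=gluster-write --directory=/rhev/data-center/mnt/glusterSD/ovirt-host1:_data \
    --rw=write --bs=1M --size=1G --direct=1 --numjobs=1 --group_reporting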


Here's the current Gluster configuration; I also applied some parameters from 
/var/lib/glusterd/groups/virt, as mentioned in other related oVirt threads I found.


gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 09b532eb-57de-4c29-862d-93993c990e32
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt-host1:/gluster_bricks/data/data
Brick2: ovirt-host2:/gluster_bricks/data/data
Brick3: ovirt-arbiter:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
server.event-threads: 4
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.server-quorum-type: server
cluster.lookup-optimize: off
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.choose-local: off
client.event-threads: 4
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: on
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable


The speed between two hosts is about 1Gb/s:

[root@ovirt-host1 ~]# iperf3 -c ovirt-host2 -p 5002
Connecting to host ovirt-host2 port 5002
[  5] local x.x.x.x port 58072 connected to y.y.y.y port 5002
[ ID] Interval   Transfer Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   938 Mbits/sec  117375 KBytes
[  5]   1.00-2.00   sec   112 MBytes   937 Mbits/sec0397 KBytes
[  5]   2.00-3.00   sec   110 MBytes   924 Mbits/sec   18344 KBytes
[  5]   3.00-4.00   sec   112 MBytes   936 Mbits/sec0369 KBytes
[  5]   4.00-5.00   sec   111 MBytes   927 Mbits/sec   12386 KBytes
[  5]   5.00-6.00   sec   112 MBytes   938 Mbits/sec0471 KBytes
[  5]   6.00-7.00   sec   108 MBytes   909 Mbits/sec   34382 KBytes
[  5]   7.00-8.00   sec   112 MBytes   942 Mbits/sec0438 KBytes
[  5]   8.00-9.00   sec   111 MBytes   928 Mbits/sec   38372 KBytes
[  5]   9.00-10.00  sec   111 MBytes   934 Mbits/sec0481 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bitrate Retr
[  5]   0.00-10.00  sec  1.08 GBytes   931 Mbits/sec  219 sender
[  5]   0.00-10.04  sec  1.08 GBytes   926 Mbits/sec  receiver

iperf Done.

Between the nodes and the arbiter it is about 230 Mbit/s:

[  5] local ovirt-arbiter port 45220 connected to ovirt-host1 port 5002
[ ID] Interval   Transfer Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec  30.6 MBytes   257 Mbits/sec  1177281 KBytes
[  5]   1.00-2.00   sec  26.2 MBytes   220 Mbits/sec0344 KBytes
[  5]   2.00-3.00   sec  28.8 MBytes   241 Mbits/sec   15288 KBytes
[  5]   3.00-4.00   sec  26.2 MBytes   220 Mbits/sec0352 KBytes
[  5]   4.00-5.00   sec  30.0 MBytes   252 Mbits/sec   32293 KBytes
[  5]   5.00-6.00   sec  26.2 MBytes   220 Mbits/sec0354 KBytes
[  5]   6.00-7.00   sec  30.0 MBytes   252 Mbits/sec   32293 KBytes
[  5]   7.00-8.00   sec  27.5 MBytes   231 Mbits/sec0355 KBytes
[  5]   8.00-9.00   sec  28.8 MBytes   241 Mbits/sec   30294 KBytes
[  5]   9.00-10.00  sec  26.2 MBytes   220 Mbits/sec3250 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bitrate Retr
[  5]   0.00-10.00  sec   281 MBytes   235 Mbits/sec  1289 sender
[  5]   0.00-10.03  sec   277 MBytes   232 Mbits/sec  receiver

iperf Done.



I'm definitely missing something obvious, and I'm not a Gluster/oVirt black 
belt... Can anyone point me in the right direction?

Thank you for your time.

Regards,
Francesco
___
Users ma

[ovirt-users] Support for OpenStack Glance is now deprecated

2022-07-25 Thread francesco--- via Users
Hi all,

as I read in the documentation 
https://www.ovirt.org/documentation/administration_guide/index.html:

"Support for OpenStack Glance is now deprecated. This functionality will be 
removed in a later release."

Do you know of any alternative to Glance as a "single point" image archive for 
creating templates that could be integrated with oVirt, or do I need to use a 
storage domain to achieve a similar result?

Thank you for your time and help,
Francesco
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JVSDBZ7SQ6NMAIV73SZAA67XRVG2JS75/


[ovirt-users] Split hosts in a new engine - oVirt 4.4.5

2022-12-29 Thread francesco--- via Users
Hello everyone!

We have fewer than 500 hosts and ~800 VMs handled by our engine, and we could 
benefit from deploying another engine to split the workload (e.g. we perform 
daily and weekly backups via a Python script, and scheduled snapshots 
(delete+add)). Is there any way to migrate/move a host from one engine to a new 
one (same version, of course)?

I thought about exporting a backup and then importing it, but then I'd have to 
"select" which hosts to import and remove all the structure 
(DC/cluster/host/storage) from the old one... a bit tricky and dangerous, in my 
opinion.

Has anyone already done this?

Thank you for your time and happy holidays!

Francesco
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PO6CNFBC2OKF57AUXZP533TX5GEWQJPA/


[ovirt-users] Update path of hosted_storage domain

2023-08-18 Thread francesco--- via Users
Hello everyone,

we have a self-hosted engine environment (oVirt 4.4.5) that uses a replica 2 + 
arbiter GlusterFS volume. These servers are both GlusterFS nodes and oVirt 
hosted-engine nodes.

For an upgrade we followed this guide 
https://access.redhat.com/documentation/it-it/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/replacing_primary_storage_host.

The planned upgrade was to add 2 new servers (node3 and node4) that would 
replace the existing ones. We added the new servers to the oVirt cluster and the 
GlusterFS pool and moved the engine to those new hosts, without touching the 
underlying GlusterFS configuration. After step 18 of the guide ("Reboot the 
existing and replacement hosts. Wait until all hosts are available before 
continuing.") we tried to start the engine, but we got an error on the 
hosted_storage domain due to the old path (the GlusterFS mount path was "node1" 
and the backupvolfile was "node2").

To avoid corruption, we updated the database with the correct path and mount 
options, in accordance with the new configuration edited in 
/etc/ovirt-hosted-engine/hosted-engine.conf (as described in the guide).

If we try to detach the node1 brick, everything stops working, causing storage 
errors. We noticed that a reference to node1 is still present in the file 
/rhev/data-center/mnt/glusterSD/node1.ovirt.xyz:_engine/36c75f6e-d95d-48f4-9ceb-ad2895ab2123/dom_md/metadata
on both of the new hosts (node3 and node4).
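
For reference, on a hosted-engine setup the storage path and mount options also 
live in the shared configuration stored on the hosted-engine storage domain, not 
only in hosted-engine.conf. A sketch of how to inspect (and, if needed, update) 
them; please verify the exact key names on your version with --get-shared-config 
first:

hosted-engine --get-shared-config storage --type=he_shared
hosted-engine --get-shared-config mnt_options --type=he_shared
# if they still point at node1, something along these lines (illustrative value):
# hosted-engine --set-shared-config mnt_options backup-volfile-servers=node4.ovirt.xyz --type=he_shared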

I'll be more than glad to attach any log file needed to understand what's going 
on. Thank you to whoever takes the time to help me out :)

Regards,
Francesco

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZX23UGI6E62NLQFX4CTMZGCRCPERE32P/