[ovirt-users] Re: Broken links on oVirt site

2021-07-06 Thread Ritesh Chikatwar
Nur,

You can contribute to this repo https://github.com/oVirt/ovirt-site
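
If you haven't contributed there before, the usual GitHub fork-and-pull-request flow applies; a minimal sketch (the fork URL and branch name are placeholders):

# a minimal sketch of the usual GitHub flow; YOUR_USER and the branch name are placeholders
git clone https://github.com/YOUR_USER/ovirt-site.git
cd ovirt-site
git checkout -b fix-broken-links
# edit the affected pages, then:
git commit -am "Fix broken external links"
git push origin fix-broken-links
# finally, open a pull request against oVirt/ovirt-site on GitHub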

On Wed, Jul 7, 2021 at 11:36 AM Nur Imam Febrianto wrote:

> Sorry, dumb question. I want to help, but where can I submit the working
> link?
>
>
>
> Thanks.
>
>
>
> Regards,
>
> Nur Imam Febrianto
>
>
>
> From: Sandro Bonazzola
> Sent: 06 July 2021 23:43
> To: oVirt Users
> Subject: [ovirt-users] Broken links on oVirt site
>
>
>
> Sending to the list, just in case someone has some time and would
> like to help fix any of them:
>
> - ./_site/develop/developer-guide/db-issues/postgres.html
>
>   *  External link http://sourcefreedom.com/tuning-postgresql-9-0-with-pgtune/ failed: 404 No error
>
>   *  External link http://www.postgresql.org/docs/9.1/static/wal-configuration.html%20WAL%20Configuration failed: 404 No error
>
> - ./_site/develop/infra/jenkins.html
>
>   *  External link http://www.cyberciti.biz/faq/linux-add-a-swap-file-howto/ failed: 403 No error
>
> - ./_site/develop/release-management/features/gluster/gluster-dr.html
>
>   *  External link https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/ failed: 404 No error
>
> - ./_site/develop/release-management/features/gluster/gluster-geo-replication.html
>
>   *  External link https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/ failed: 404 No error
>
> - ./_site/develop/release-management/features/gluster/gluster-hooks-management.html
>
>   *  External link https://docs.gluster.org/en/latest/Administrator%20Guide/Hook-scripts/ failed: 404 No error
>
> - ./_site/develop/release-management/features/network/ipv6-support.html
>
>   *  External link http://www.cyberciti.biz/faq/redhat-centos-rhel-fedora-linux-add-multiple-ip-samenic/ failed: 403 No error
>
>   *  External link http://www.cyberciti.biz/faq/rhel-redhat-fedora-centos-ipv6-network-configuration/

[ovirt-users] Re: Broken links on oVirt site

2021-07-06 Thread Nur Imam Febrianto
Sorry, dumb question. I want to help, but where can I submit the working link?

Thanks.

Regards,
Nur Imam Febrianto

From: Sandro Bonazzola
Sent: 06 July 2021 23:43
To: oVirt Users
Subject: [ovirt-users] Broken links on oVirt site

Sending to the list, just in case someone has some time and would like 
to help fix any of them:

- ./_site/develop/developer-guide/db-issues/postgres.html

  *  External link http://sourcefreedom.com/tuning-postgresql-9-0-with-pgtune/ failed: 404 No error

  *  External link http://www.postgresql.org/docs/9.1/static/wal-configuration.html%20WAL%20Configuration failed: 404 No error

- ./_site/develop/infra/jenkins.html

  *  External link http://www.cyberciti.biz/faq/linux-add-a-swap-file-howto/ failed: 403 No error

- ./_site/develop/release-management/features/gluster/gluster-dr.html

  *  External link https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/ failed: 404 No error

- ./_site/develop/release-management/features/gluster/gluster-geo-replication.html

  *  External link https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/ failed: 404 No error

- ./_site/develop/release-management/features/gluster/gluster-hooks-management.html

  *  External link https://docs.gluster.org/en/latest/Administrator%20Guide/Hook-scripts/ failed: 404 No error

- ./_site/develop/release-management/features/network/ipv6-support.html

  *  External link http://www.cyberciti.biz/faq/redhat-centos-rhel-fedora-linux-add-multiple-ip-samenic/ failed: 403 No error

  *  External link http://www.cyberciti.biz/faq/rhel-redhat-fedora-centos-ipv6-network-configuration/

[ovirt-users] Re: Strange Issue with imageio

2021-07-06 Thread Nur Imam Febrianto
After upgrading to 4.4.7, where the reported bug was fixed, this 
issue still occurs in my environment. I need to click Test Connection first 
to make my ISO upload process work. Another finding in the imageio logs is:

2021-07-07 12:48:56,255 INFO (Thread-14) [http] OPEN connection=14 
client=:::xxx.xxx.xxx.xxx
2021-07-07 12:49:56,317 WARNING (Thread-14) [http] Timeout reading or writing 
to socket: The read operation timed out
2021-07-07 12:49:56,318 INFO (Thread-14) [http] CLOSE connection=14 
client=:::xxx.xxx.xxx.xxx [connection 1 ops, 60.062147 s] [dispatch 1 ops, 
0.000547 s]

Any idea?

Regards,
Nur Imam Febrianto

From: Nir Soffer
Sent: 01 July 2021 23:48
To: Gianluca Cecchi
Cc: Eyal Shenitzky; Nur Imam 
Febrianto; oVirt Users
Subject: Re: [ovirt-users] Re: Strange Issue with imageio

On Thu, Jul 1, 2021 at 11:15 AM Gianluca Cecchi
 wrote:
>
> On Thu, May 27, 2021 at 7:43 AM Eyal Shenitzky  wrote:
>>
>> This bug is targeted to be fixed in 4.4.7 so 4.4.6 doesn't contain the fix.
>>
>
> But is there a workaround for this?
> On a single host environment with external engine and local storage and 4.4.5 
> it seems that uploading an ISO always gives OK without uploading anything,
> whether I select test connection or not...
> Is it only related to the GUI, or does it happen in general even if I use the API?

I don't know about any issues using the API, and it is used by backup
applications to back up and restore VMs, so it should be more reliable.
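
For reference, the Python SDK ships upload examples that can be used instead
of the GUI. A rough sketch, assuming the example script path and flags of the
python3-ovirt-engine-sdk4 package (they may differ between SDK versions, so
check your installed copy):

# upload an ISO with the SDK example script; all paths and names below are placeholders
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
    --engine-url https://engine.example.com \
    --username admin@internal \
    --password-file /root/engine_password \
    --cafile /etc/pki/ovirt-engine/ca.pem \
    --sd-name my_iso_domain \
    /path/to/image.iso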

Nir

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OC6IPTJE6QNDUZRAMVGU2DULCYOPZ2OX/


[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Nur Imam Febrianto
Already tried it on one 4.4.7 host, and it solves the issue.
Maybe this issue should be marked as critical, because the host is not usable 
at all once upgraded to 4.4.7.
😊

Regards,
Nur Imam Febrianto

Sent from Mail for Windows 10

From: Nur Imam Febrianto
Sent: 07 July 2021 9:02
To: Klaas Demter; 
users@ovirt.org
Subject: [ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 
4.4.7 host

Where should I do this?
At the host, or at the HE?

Thanks.

Regards,
Nur Imam Febrianto

From: Klaas Demter
Sent: 07 July 2021 3:31
To: users@ovirt.org
Subject: [ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 
4.4.7 host


https://bugzilla.redhat.com/show_bug.cgi?id=1979624

run: semodule -B; touch /.autorelabel; reboot

report back if it fixes everything


On 7/6/21 5:40 PM, Nur Imam Febrianto wrote:
I’m having a similar problem. 15 hosts, 7 of them already upgraded to 
4.4.7, and I can’t migrate any VM or the HE from a 4.4.6 host to 4.4.7.

Regards,
Nur Imam Febrianto

From: Sandro Bonazzola
Sent: 06 July 2021 19:37
To: oVirt Users; Arik Hadas
Subject: [ovirt-users] Failing to migrate hosted engine from 4.4.6 host to 
4.4.7 host

Hi,
I updated the hosted engine to 4.4.7 and one of the 2 nodes where the engine is 
running.
Current status is:
- Hosted engine at 4.4.7 running on Node 0
- Node 0 at 4.4.6
- Node 1 at 4.4.7

Now, moving Node 0 to maintenance successfully moved the SPM from Node 0 to 
Node 1, but while trying to migrate the hosted engine I get this in vdsm.log on Node 0:

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] START 
repoStats(domains=()) from=:::10.46.8.133,35048, 
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:48)

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] FINISH repoStats 
return={'1996dc3b-d33f-49cb-b32a-8f7b1d50af5e': {'code': 0, 'lastCheck': '3.0', 
'delay': '0.00114065', 'valid': True, 'version': 5, 
'acquired': True, 'actual': True}} from=:::10.46.8.133,35048, 
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:54)

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] START 
multipath_health() from=:::10.46.8.133,35048, 
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:48)

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] FINISH 
multipath_health return={} from=:::10.46.8.133,35048, 
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:54)

2021-07-06 12:25:07,883+ ERROR (migsrc/b2072331) [virt.vm] 
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') can't connect to virtlogd: Unable 
to open system token /run/libvirt/common/system.token: Permission denied 
(migration:294)

2021-07-06 12:25:07,888+ INFO  (jsonrpc/5) [api.host] FINISH getStats 
return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
from=:::10.46.8.133,35048 (api:54)

2021-07-06 12:25:08,166+ ERROR (migsrc/b2072331) [virt.vm] 
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') Failed to migrate (migration:467)

Traceback (most recent call last):

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 441, in 
_regular_run

time.time(), machineParams

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 537, in 
_startUnderlyingMigration

self._perform_with_conv_schedule(duri, muri)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 626, in 
_perform_with_conv_schedule

self._perform_migration(duri, muri)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 555, in 
_perform_migration

self._migration_flags)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 159, in 
call

return getattr(self._vm._dom, name)(*a, **kw)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f

ret = attr(*args, **kwargs)

  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper

ret = f(*args, **kwargs)

  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in 
wrapper

return func(inst, *args, **kwargs)

  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in 
migrateToURI3

raise libvirtError('virDomainMigrateToURI3() failed')

libvirt.libvirtError: can't connect to virtlogd: Unable to open system token 
/run/libvirt/common/system.token: Permission denied

2021-07-

[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Nur Imam Febrianto
Where should I do this?
At the host, or at the HE?

Thanks.

Regards,
Nur Imam Febrianto

From: Klaas Demter
Sent: 07 July 2021 3:31
To: users@ovirt.org
Subject: [ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 
4.4.7 host


https://bugzilla.redhat.com/show_bug.cgi?id=1979624

run: semodule -B; touch /.autorelabel; reboot

report back if it fixes everything


On 7/6/21 5:40 PM, Nur Imam Febrianto wrote:
I’m having a similar problem. 15 hosts, 7 of them already upgraded to 
4.4.7, and I can’t migrate any VM or the HE from a 4.4.6 host to 4.4.7.

Regards,
Nur Imam Febrianto

From: Sandro Bonazzola
Sent: 06 July 2021 19:37
To: oVirt Users; Arik Hadas
Subject: [ovirt-users] Failing to migrate hosted engine from 4.4.6 host to 
4.4.7 host

Hi,
I updated the hosted engine to 4.4.7 and one of the 2 nodes where the engine is 
running.
Current status is:
- Hosted engine at 4.4.7 running on Node 0
- Node 0 at 4.4.6
- Node 1 at 4.4.7

Now, moving Node 0 to maintenance successfully moved the SPM from Node 0 to 
Node 1, but while trying to migrate the hosted engine I get this in vdsm.log on Node 0:

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] START 
repoStats(domains=()) from=:::10.46.8.133,35048, 
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:48)

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] FINISH repoStats 
return={'1996dc3b-d33f-49cb-b32a-8f7b1d50af5e': {'code': 0, 'lastCheck': '3.0', 
'delay': '0.00114065', 'valid': True, 'version': 5, 
'acquired': True, 'actual': True}} from=:::10.46.8.133,35048, 
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:54)

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] START 
multipath_health() from=:::10.46.8.133,35048, 
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:48)

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] FINISH 
multipath_health return={} from=:::10.46.8.133,35048, 
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:54)

2021-07-06 12:25:07,883+ ERROR (migsrc/b2072331) [virt.vm] 
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') can't connect to virtlogd: Unable 
to open system token /run/libvirt/common/system.token: Permission denied 
(migration:294)

2021-07-06 12:25:07,888+ INFO  (jsonrpc/5) [api.host] FINISH getStats 
return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
from=:::10.46.8.133,35048 (api:54)

2021-07-06 12:25:08,166+ ERROR (migsrc/b2072331) [virt.vm] 
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') Failed to migrate (migration:467)

Traceback (most recent call last):

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 441, in 
_regular_run

time.time(), machineParams

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 537, in 
_startUnderlyingMigration

self._perform_with_conv_schedule(duri, muri)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 626, in 
_perform_with_conv_schedule

self._perform_migration(duri, muri)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 555, in 
_perform_migration

self._migration_flags)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 159, in 
call

return getattr(self._vm._dom, name)(*a, **kw)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f

ret = attr(*args, **kwargs)

  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper

ret = f(*args, **kwargs)

  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in 
wrapper

return func(inst, *args, **kwargs)

  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in 
migrateToURI3

raise libvirtError('virDomainMigrateToURI3() failed')

libvirt.libvirtError: can't connect to virtlogd: Unable to open system token 
/run/libvirt/common/system.token: Permission denied

2021-07-06 12:25:08,197+ INFO  (jsonrpc/6) [api.virt] START 
getMigrationStatus() from=:::10.46.8.133,35048, flow_id=4e86b85d, 
vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:48)

2021-07-06 12:25:08,197+ INFO  (jsonrpc/6) [api.virt] FINISH 
getMigrationStatus return={'status': {'code': 0, 'message': 'Done'}, 
'migrationStats': {'status': {'code': 12, 'message': 'Fatal error during 
migration'}, 'progress': 0}} from=:::10.46.8.133,35048, flow_id=4e86b85d, 
vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:54)
On node 0:
# ls -

[ovirt-users] Re: Updates Failing

2021-07-06 Thread Klaas Demter
You need to put the host into maintenance mode if you have multiple 
hosts. If you only have one, you need to shut down all normal VMs, put the 
hosted engine into maintenance mode, and shut that down as well.
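
If you prefer to script this rather than use the web UI, the engine REST API
exposes maintenance as a host action. A minimal sketch (engine URL,
credentials, and host id are placeholders):

# put the host into maintenance before updating
curl -s -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -X POST -d '<action/>' \
  'https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate'
# ...run the update, then bring the host back:
curl -s -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -X POST -d '<action/>' \
  'https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/activate'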


On 7/6/21 9:44 PM, Gary Pedretty wrote:

Getting errors trying to run dnf/yum update due to a vdsm issue.


yum update
Last metadata expiration check: 0:17:33 ago on Tue 06 Jul 2021 
11:17:05 AM AKDT.

Error: Running QEMU processes found, cannot upgrade Vdsm.

Current running version of vdsm is


vdsm-4.40.60.7-1.el8


CentOS Stream
RHEL - 8.5 - 3.el8

kernel
4.18.0-310.el8.x86_64



___
Gary Pedretty
IT Manager
Ravn Alaska

Office: 907-266-8451
Mobile: 907-388-2247
Email: gary.pedre...@ravnalaska.com 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EB36I2F55Z6QTBXJVISZ2GMFCLCQHGLU/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5PJI5HIQ463YP44ZEK7DYTMDBSJX7W2Q/


[ovirt-users] Updates failing

2021-07-06 Thread Gary Pedretty
Getting errors trying to run dnf/yum update due to a vdsm issue.


yum update
Last metadata expiration check: 0:17:33 ago on Tue 06 Jul 2021 11:17:05 AM AKDT.
Error: Running QEMU processes found, cannot upgrade Vdsm.

Current running version of vdsm is


vdsm-4.40.60.7-1.el8


CentOS Stream
RHEL - 8.5 - 3.el8

kernel
4.18.0-310.el8.x86_64



___
Gary Pedretty
IT Manager
Ravn Alaska

Office: 907-266-8451
Mobile: 907-388-2247
Email: gary.pedre...@ravnalaska.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4WCHSF5D5EYBUCRJ2UMBZP343QMHDN3Q/


[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Klaas Demter

https://bugzilla.redhat.com/show_bug.cgi?id=1979624

run: semodule -B; touch /.autorelabel; reboot

report back if it fixes everything
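
To check whether the relabel took effect, something like this should do (a
sketch; the expected file and context come from the working host reported
later in this thread):

# after the reboot, the token file should exist with a virt-related SELinux context
ls -lZ /run/libvirt/common/system.token
# a targeted alternative to the full relabel, if only this file is mislabeled:
restorecon -v /run/libvirt/common/system.token
systemctl restart libvirtd virtlogd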


On 7/6/21 5:40 PM, Nur Imam Febrianto wrote:


I’m having a similar problem. 15 hosts, 7 of them already 
upgraded to 4.4.7, and I can’t migrate any VM or the HE from a 4.4.6 host to 
4.4.7.


Regards,

Nur Imam Febrianto

From: Sandro Bonazzola
Sent: 06 July 2021 19:37
To: oVirt Users; Arik Hadas
Subject: [ovirt-users] Failing to migrate hosted engine from 4.4.6 
host to 4.4.7 host


Hi,

I updated the hosted engine to 4.4.7 and one of the 2 nodes where the 
engine is running.


Current status is:

- Hosted engine at 4.4.7 running on Node 0

- Node 0 at 4.4.6

- Node 1 at 4.4.7

Now, moving Node 0 to maintenance successfully moved the SPM from Node 
0 to Node 1, but while trying to migrate the hosted engine I get this in 
vdsm.log on Node 0:


2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] START 
repoStats(domains=()) from=:::10.46.8.133,35048, 
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:48)
2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] FINISH repoStats 
return={'1996dc3b-d33f-49cb-b32a-8f7b1d50af5e': {'code': 0, 'lastCheck': '3.0', 
'delay': '0.00114065', 'valid': True, 'version': 5, 
'acquired': True, 'actual': True}} from=:::10.46.8.133,35048, 
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:54)
2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] START 
multipath_health() from=:::10.46.8.133,35048, 
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:48)
2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] FINISH 
multipath_health return={} from=:::10.46.8.133,35048, 
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:54)
2021-07-06 12:25:07,883+ ERROR (migsrc/b2072331) [virt.vm] 
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') can't connect to virtlogd: Unable 
to open system token /run/libvirt/common/system.token: Permission denied 
(migration:294)
2021-07-06 12:25:07,888+ INFO  (jsonrpc/5) [api.host] FINISH getStats 
return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
from=:::10.46.8.133,35048 (api:54)
2021-07-06 12:25:08,166+ ERROR (migsrc/b2072331) [virt.vm] 
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') Failed to migrate (migration:467)
Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 441, in 
_regular_run
     time.time(), machineParams
   File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 537, in 
_startUnderlyingMigration
     self._perform_with_conv_schedule(duri, muri)
   File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 626, in 
_perform_with_conv_schedule
     self._perform_migration(duri, muri)
   File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 555, in 
_perform_migration
     self._migration_flags)
   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 159, in 
call
     return getattr(self._vm._dom, name)(*a, **kw)
   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in 
f
     ret = attr(*args, **kwargs)
   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper
     ret = f(*args, **kwargs)
   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in 
wrapper
     return func(inst, *args, **kwargs)
   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, 
inmigrateToURI3
     raise libvirtError('virDomainMigrateToURI3() failed')
libvirt.libvirtError: can't connect to virtlogd: Unable to open system token 
/run/libvirt/common/system.token: Permission denied
2021-07-06 12:25:08,197+ INFO  (jsonrpc/6) [api.virt] START 
getMigrationStatus() from=:::10.46.8.133,35048, flow_id=4e86b85d, 
vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:48)
2021-07-06 12:25:08,197+ INFO  (jsonrpc/6) [api.virt] FINISH 
getMigrationStatus return={'status': {'code': 0, 'message': 'Done'}, 
'migrationStats': {'status': {'code': 12, 'message': 'Fatal error during 
migration'}, 'progress': 0}} from=:::10.46.8.133,35048, flow_id=4e86b85d, 
vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:54)

On node 0:

# ls -lZ /run/libvirt/common/system.token
ls: cannot access '/run/libvirt/common/system.token': No such file or 
directory


On node 1:

# ls -lZ /run/libvirt/common/system.token
-rw---. 1 root root system_u:object_r:virt_var_run_t:s0 32 Jul  6 
09:29 /run/libvirt/common/system.token


any clue?

--

*Sandro Bonazzola*

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 


[ovirt-users] Updates Failing

2021-07-06 Thread Gary Pedretty
Getting errors trying to run dnf/yum update due to a vdsm issue.


yum update
Last metadata expiration check: 0:17:33 ago on Tue 06 Jul 2021 11:17:05 AM AKDT.
Error: Running QEMU processes found, cannot upgrade Vdsm.

Current running version of vdsm is


vdsm-4.40.60.7-1.el8


CentOS Stream
RHEL - 8.5 - 3.el8

kernel
4.18.0-310.el8.x86_64



___
Gary Pedretty
IT Manager
Ravn Alaska

Office: 907-266-8451
Mobile: 907-388-2247
Email: gary.pedre...@ravnalaska.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EB36I2F55Z6QTBXJVISZ2GMFCLCQHGLU/


[ovirt-users] Q: Node host becomes non-operational after upgrade

2021-07-06 Thread Andrei Verovski

Hi,


Today I upgraded oVirt Engine to 4.4.7.6, and then one of the nodes 
(running Centos Stream).

After the upgrade, the node (node14) became non-operational. Same after “Reinstall”.

Additionally, there are MANY error messages:
Host node14 moved to Non-Operational state as host CPU type is not 
supported in this cluster compatibility version or is not supported at all

Quite strange: before the upgrade this problem with the host CPU type didn't exist.
The vdsm-networking service is running fine on node14.
vdsmd is running but has this error message:
Jul 06 21:19:23 node14.xxx sudo[3565]: pam_systemd(sudo:session): Failed 
to create session: Start job for unit user-0.slice failed with 'canceled'


I suspect that there are unnecessary repos enabled on my CentOS Stream 
node, which leads to this kind of error.

Can anyone please check? Thanks in advance. (A quick way to narrow this 
down is sketched after the repo list below.)



[root@node14 ~]# yum repolist enabled

repo id repo name
appstream CentOS Stream 8 - AppStream
baseos CentOS Stream 8 - BaseOS
epel-next Extra Packages for Enterprise Linux 8 - Next - x86_64
extras CentOS Stream 8 - Extras
ovirt-4.4 Latest oVirt 4.4 Release
ovirt-4.4-centos-ceph-pacific Ceph packages for x86_64
ovirt-4.4-centos-gluster8 CentOS-8 - Gluster 8
ovirt-4.4-centos-opstools CentOS-8 - OpsTools - collectd
ovirt-4.4-centos-stream-advanced-virtualization Advanced Virtualization 
CentOS Stream packages for x86_64

ovirt-4.4-centos-stream-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
ovirt-4.4-centos-stream-ovirt44 CentOS-8 Stream - oVirt 4.4
ovirt-4.4-copr:copr.fedorainfracloud.org:mdbarroso:ovsdbapp Copr repo 
for ovsdbapp owned by mdbarroso
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible Copr repo 
for gluster-ansible owned by sac
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection Copr 
repo for EL8_collection owned by sbonazzo

ovirt-4.4-epel Extra Packages for Enterprise Linux 8 - x86_64
ovirt-4.4-openstack-train OpenStack Train Repository
ovirt-4.4-virtio-win-latest virtio-win builds roughly matching what will 
be shipped in upcoming RHEL

powertools CentOS Stream 8 - PowerTools
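
One way to narrow this down, as a sketch (the repo id below is only an
example from the list above, and dnf config-manager needs dnf-plugins-core):

# see which repo each virt-related package was installed from
dnf list installed 'vdsm*' 'libvirt*' 'qemu*'
# temporarily disable a suspect repo, then retry the reinstall from the engine
dnf config-manager --set-disabled epel-next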



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MUEX3WZJBT77RYU5XAZKJA5ZGPCYDB6U/


[ovirt-users] Broken links on oVirt site

2021-07-06 Thread Sandro Bonazzola
Sending to the list, just in case someone has some time and would
like to help fix any of them:

- ./_site/develop/developer-guide/db-issues/postgres.html

  *  External link http://sourcefreedom.com/tuning-postgresql-9-0-with-pgtune/ failed: 404 No error

  *  External link http://www.postgresql.org/docs/9.1/static/wal-configuration.html%20WAL%20Configuration failed: 404 No error

- ./_site/develop/infra/jenkins.html

  *  External link http://www.cyberciti.biz/faq/linux-add-a-swap-file-howto/ failed: 403 No error

- ./_site/develop/release-management/features/gluster/gluster-dr.html

  *  External link https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/ failed: 404 No error

- ./_site/develop/release-management/features/gluster/gluster-geo-replication.html

  *  External link https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/ failed: 404 No error

- ./_site/develop/release-management/features/gluster/gluster-hooks-management.html

  *  External link https://docs.gluster.org/en/latest/Administrator%20Guide/Hook-scripts/ failed: 404 No error

- ./_site/develop/release-management/features/network/ipv6-support.html

  *  External link http://www.cyberciti.biz/faq/redhat-centos-rhel-fedora-linux-add-multiple-ip-samenic/ failed: 403 No error

  *  External link http://www.cyberciti.biz/faq/rhel-redhat-fedora-centos-ipv6-network-configuration/ failed: 403 No error

- ./_site/develop/release-management/features/network/isolated-ports.html

  *  External link https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-A9287D46-FDE0-4D64-9348-3905FEAC7FAE.html failed: 403 No error

- ./_site/develop/release-management/features/network/lldp.html

  *  External link http://standards.ieee.org/getieee802/download/802.1AB-2016.zip failed: 404 No error

- ./_site/develop/release-management/features/network/networking-api-security-groups.html

  *  External link http://www.openvswitch.org/support/dist-docs/ovn-nb.5.html failed: 404 No error

- ./_site/develop/release-management/features/virt/ovmf.html

  *  External link https://support.microsoft.com/en-us/kb/888929 failed: 404 No error

- ./_site/develop/release-management/process/press-plan.html

  *  External link http://www.lharba.com/index.htm failed: 404 No error

  *  External link http://www.serverwatch.com/author/Paul-Ferrill-3660.htm failed: 404 No error

  *  External link http://www.wired.com/wiredenterprise/author/bobmcmillan/ failed: 404 No error

  *  External link http://www.wired.com/wiredenterprise/author/cade_metz/ failed: 404 No error

- ./_site/documentation/administration_guide/index.html

  *  External link http://docs.ansible.com/ansible/list_of_cloud_modules.html#ovirt failed: 404 No error

  *  External link https://access.redhat.com/articles/3215851 failed: 403 No error

- ./_site/documentation/data_warehouse_guide/index.html

  *  External link https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/html-single/data_warehouse_guide/#Application_Settings_for_the_Data_Warehouse_service_in_ovirt-engine-dwhd_file failed: 404 No error

HTML-Proofer found 19 failures!

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/64JSAGYWE6QUJOOT2P7Q6B5E5URQY6YP/


[ovirt-users] Re: what happens to VMs when a host shuts down?

2021-07-06 Thread Arik Hadas
On Tue, Jul 6, 2021 at 7:02 PM Sandro Bonazzola  wrote:

>
>
> On Tue, Jul 6, 2021 at 17:33 Nir Soffer wrote:
>
>> On Tue, Jul 6, 2021 at 5:58 PM Scott Worthington
>>  wrote:
>> >
>> >
>> >
>> > On Tue, Jul 6, 2021 at 8:13 AM Nir Soffer  wrote:
>> >>
>> >> On Tue, Jul 6, 2021 at 2:29 PM Sandro Bonazzola 
>> wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Jul 6, 2021 at 13:03 Nir Soffer wrote:
>> 
>>  On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet 
>> wrote:
>>  > We are installing UPS powerchute client on hypervisors.
>>  >
>>  > What is the default vms behaviour of running vms when an
>> hypervisor is
>>  > ordered to shutdown: do the vms live migrate or do they shutdown
>>  > properly (even the restart on an other host because of HA) ?
>> 
>>  In general VMs are not restarted after an unexpected shutdown, but
>> HA VMs
>>  are restarted after failures.
>> 
>>  If the HA VM has a lease, it can restart safely on another host
>> regardless of
>>  the original host status. If the HA VM does not have a lease, the
>> system must
>>  wait until the original host is up again to check if the VM is still
>>  running on this
>>  host.
>> 
>>  Arik can add more details on this.
>> >>>
>> >>>
>> >>> I think the question is not related to what happens after the host is
>> back.
>> >>> I think the question is what happens when the host goes down.
>> >>> To me, the right way to shutdown a host is putting it first to
>> maintenance (VM evacuate to other hosts) and then shutdown.
>> >>
>> >>
>> >> Right, but we don't have integration with the UPS, so the engine
>> >> cannot put the host to maintenance when the host loses power and the
>> >> UPS will shut it down after a few minutes.
>> >
>> >
>> > This is outside of the scope of oVirt team:
>> >
>> > Perhaps one could combine multiple applications ( NUT + Ansible +
>> Nagios/Zabbix ) to notify the oVirt engine to switch a host to maintenance?
>> >
>> > NUT[0] could be configured to alert a monitoring system ( like Nagios
>> or Zabbix) to trigger an Ansible playbook [1][2] to put the host in
>> maintenance mode, and the trigger should happen before the UPS battery is
>> depleted (you'll have to account for the time it takes to live migrate VMs).
>>
>> I would trigger this once power is lost. You never know how much time
>> migration will take, so best migrate all vms immediately.
>>
>> It would be nice to integrate this with engine, but we can start by
>> something
>> like you describe, that will use engine API/SDK to prepare the hosts for
>> graceful shutdown.
>>
>
There are pros and cons to this approach.
If the workloads manage to get evacuated quickly, before libvirt-guests
starts shutting them down, that's great.
But what happens if the VMs are still being migrated after libvirt-guests
has initiated the shutdowns?
Think about the following case:
1. A highly available VM starts migrating
2. libvirt-guests tries to shut down the guest
3. The migration completes
4. The guest shuts down while it runs on the destination host
I'm not sure that we'll treat that case as a non-intentional shutdown, since
we may lose the context of the shutdown while the VM moves to the
destination host, and therefore won't try to restart the VM automatically.


>
> we already have a role for immediate shutdown of the whole datacenter:
> https://github.com/oVirt/ovirt-ansible-shutdown-env
> now integrated in ansible collection
> https://github.com/oVirt/ovirt-ansible-collection/tree/master/roles/shutdown_env
>
>
>
>>
>> > [0] Network UPS Tools
>> https://networkupstools.org/docs/user-manual.chunked/index.html
>> > [1]
>> https://www.ovirt.org/develop/release-management/features/infra/ansible_modules.html
>> > [2]
>> https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_host_module.html
>>
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TYCMHTXMBD2KLGPYXVSB7CAUMBFWGLEP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list

[ovirt-users] Re: what happens to VMs when a host shuts down?

2021-07-06 Thread Sandro Bonazzola
On Tue, Jul 6, 2021 at 17:33 Nir Soffer wrote:

> On Tue, Jul 6, 2021 at 5:58 PM Scott Worthington
>  wrote:
> >
> >
> >
> > On Tue, Jul 6, 2021 at 8:13 AM Nir Soffer  wrote:
> >>
> >> On Tue, Jul 6, 2021 at 2:29 PM Sandro Bonazzola 
> wrote:
> >>>
> >>>
> >>>
> >>> On Tue, Jul 6, 2021 at 13:03 Nir Soffer wrote:
> 
>  On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet 
> wrote:
>  > We are installing UPS powerchute client on hypervisors.
>  >
>  > What is the default vms behaviour of running vms when an hypervisor
> is
>  > ordered to shutdown: do the vms live migrate or do they shutdown
>  > properly (even the restart on an other host because of HA) ?
> 
>  In general VMs are not restarted after an unexpected shutdown, but HA
> VMs
>  are restarted after failures.
> 
>  If the HA VM has a lease, it can restart safely on another host
> regardless of
>  the original host status. If the HA VM does not have a lease, the
> system must
>  wait until the original host is up again to check if the VM is still
>  running on this
>  host.
> 
>  Arik can add more details on this.
> >>>
> >>>
> >>> I think the question is not related to what happens after the host is
> back.
> >>> I think the question is what happens when the host goes down.
> >>> To me, the right way to shutdown a host is putting it first to
> maintenance (VM evacuate to other hosts) and then shutdown.
> >>
> >>
> >> Right, but we don't have integration with the UPS, so the engine cannot
> >> put the host to maintenance when the host loses power and the UPS will
> >> shut it down after a few minutes.
> >
> >
> > This is outside of the scope of oVirt team:
> >
> > Perhaps one could combine multiple applications ( NUT + Ansible +
> Nagios/Zabbix ) to notify the oVirt engine to switch a host to maintenance?
> >
> > NUT[0] could be configured to alert a monitoring system ( like Nagios or
> Zabbix) to trigger an Ansible playbook [1][2] to put the host in
> maintenance mode, and the trigger should happen before the UPS battery is
> depleted (you'll have to account for the time it takes to live migrate VMs).
>
> I would trigger this once power is lost. You never know how much time
> migration will take, so best migrate all vms immediately.
>
> It would be nice to integrate this with engine, but we can start by
> something
> like you describe, that will use engine API/SDK to prepare the hosts for
> graceful shutdown.
>

we already have a role for immediate shutdown of the whole datacenter:
https://github.com/oVirt/ovirt-ansible-shutdown-env
now integrated in the Ansible collection:
https://github.com/oVirt/ovirt-ansible-collection/tree/master/roles/shutdown_env
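
For reference, driving that role might look roughly like this. This is only a
sketch: the engine_* variable names are assumptions based on the collection's
usual conventions, so check the role's README before relying on them.

# assumes the ovirt.ovirt collection is installed (ansible-galaxy collection install ovirt.ovirt)
cat > shutdown.yml <<'EOF'
- hosts: localhost
  vars:
    engine_url: https://engine.example.com/ovirt-engine/api
    engine_user: admin@internal
    engine_password: "ENGINE_PASSWORD"
    engine_cafile: /etc/pki/ovirt-engine/ca.pem
  roles:
    - ovirt.ovirt.shutdown_env
EOF
ansible-playbook shutdown.yml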



>
> > [0] Network UPS Tools
> https://networkupstools.org/docs/user-manual.chunked/index.html
> > [1]
> https://www.ovirt.org/develop/release-management/features/infra/ansible_modules.html
> > [2]
> https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_host_module.html
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TYCMHTXMBD2KLGPYXVSB7CAUMBFWGLEP/


[ovirt-users] Re: Any way to terminate stuck export task

2021-07-06 Thread Nir Soffer
On Tue, Jul 6, 2021 at 5:55 PM Gianluca Cecchi
 wrote:
>
> On Tue, Jul 6, 2021 at 2:52 PM Nir Soffer  wrote:
>
>>
>>
>> Too bad.
>>
>> You can evaluate how oVirt 4.4 will work with this appliance using
>> this dd command:
>>
>> dd if=/dev/zero bs=8M count=38400 of=/path/to/new/disk
>> oflag=direct conv=fsync
>>
>> We don't use dd for this, but the operation is the same on NFS < 4.2.
>>
>
> I confirm I'm able to saturate the 1Gb/s link. tried creating a 10Gb file on 
> the StoreOnce appliance
>  # time dd if=/dev/zero bs=8M count=1280 
> of=/rhev/data-center/mnt/172.16.1.137\:_nas_EXPORT-DOMAIN/ansible_ova/test.img
>  oflag=direct conv=fsync
> 1280+0 records in
> 1280+0 records out
> 10737418240 bytes (11 GB) copied, 98.0172 s, 110 MB/s
>
> real 1m38.035s
> user 0m0.003s
> sys 0m2.366s
>
> So are you saying that after upgrading to 4.4.6 (or the just-released 4.4.7) I 
> should be able to export at this speed?

The preallocation part will run at the same speed, and then
you need to copy the used parts of the disk, with the time depending
on how much data is used.

> Or do I need NFS v4.2 anyway?

That is without NFS 4.2. With NFS 4.2, the entire allocation will take less
than a second without consuming any network bandwidth.

> BTW: is there any capping put in place by oVirt on the export phase (the 
> qemu-img command in practice), designed for example not to perturb the 
> activity of the hypervisor? Or do you think that if I have a 10Gb/s network 
> backend, powerful disks on oVirt, and powerful NFS server processing power, 
> I should get much more speed?

We don't have any capping in place; usually people complain that copying
images is too slow.

In general, when copying to file-based storage we don't use the -W option
(unordered writes), so the copy will be slower compared with block-based
storage, where qemu-img uses 8 concurrent writes. So in a way we always
cap the copies to file-based storage. To get maximum throughput you need
to run multiple copies at the same time.
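
For illustration, the difference is roughly the following (a sketch with
placeholder paths; the exact flags oVirt passes may differ):

# file-based target: ordered writes, as in the file-storage case described above
qemu-img convert -p -t none -f raw -O raw /path/src.img /path/dst.img
# block-based target: -W allows out-of-order (concurrent) writes
qemu-img convert -p -t none -f raw -O raw -W /path/src.img /dev/vg/dst_lv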

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VE6X6ASHETSPLMQ4HTENF4D5UQPV7HQL/


[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Nur Imam Febrianto
I’m having a similar problem. 15 hosts, 7 of them already upgraded to 
4.4.7, and I can’t migrate any VM or the HE from a 4.4.6 host to 4.4.7.

Regards,
Nur Imam Febrianto

From: Sandro Bonazzola
Sent: 06 July 2021 19:37
To: oVirt Users; Arik Hadas
Subject: [ovirt-users] Failing to migrate hosted engine from 4.4.6 host to 
4.4.7 host

Hi,
I updated the hosted engine to 4.4.7 and one of the 2 nodes where the engine is 
running.
Current status is:
- Hosted engine at 4.4.7 running on Node 0
- Node 0 at 4.4.6
- Node 1 at 4.4.7

Now, moving Node 0 to maintenance successfully moved the SPM from Node 0 to 
Node 1, but while trying to migrate the hosted engine I get this in vdsm.log on Node 0:

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] START 
repoStats(domains=()) from=:::10.46.8.133,35048, 
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:48)

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] FINISH repoStats 
return={'1996dc3b-d33f-49cb-b32a-8f7b1d50af5e': {'code': 0, 'lastCheck': '3.0', 
'delay': '0.00114065', 'valid': True, 'version': 5, 
'acquired': True, 'actual': True}} from=:::10.46.8.133,35048, 
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:54)

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] START 
multipath_health() from=:::10.46.8.133,35048, 
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:48)

2021-07-06 12:25:07,882+ INFO  (jsonrpc/5) [vdsm.api] FINISH 
multipath_health return={} from=:::10.46.8.133,35048, 
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:54)

2021-07-06 12:25:07,883+ ERROR (migsrc/b2072331) [virt.vm] 
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') can't connect to virtlogd: Unable 
to open system token /run/libvirt/common/system.token: Permission denied 
(migration:294)

2021-07-06 12:25:07,888+ INFO  (jsonrpc/5) [api.host] FINISH getStats 
return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
from=:::10.46.8.133,35048 (api:54)

2021-07-06 12:25:08,166+ ERROR (migsrc/b2072331) [virt.vm] 
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') Failed to migrate (migration:467)

Traceback (most recent call last):

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 441, in 
_regular_run

time.time(), machineParams

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 537, in 
_startUnderlyingMigration

self._perform_with_conv_schedule(duri, muri)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 626, in 
_perform_with_conv_schedule

self._perform_migration(duri, muri)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 555, in 
_perform_migration

self._migration_flags)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 159, in 
call

return getattr(self._vm._dom, name)(*a, **kw)

  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f

ret = attr(*args, **kwargs)

  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper

ret = f(*args, **kwargs)

  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in 
wrapper

return func(inst, *args, **kwargs)

  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in 
migrateToURI3

raise libvirtError('virDomainMigrateToURI3() failed')

libvirt.libvirtError: can't connect to virtlogd: Unable to open system token 
/run/libvirt/common/system.token: Permission denied

2021-07-06 12:25:08,197+ INFO  (jsonrpc/6) [api.virt] START 
getMigrationStatus() from=:::10.46.8.133,35048, flow_id=4e86b85d, 
vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:48)

2021-07-06 12:25:08,197+ INFO  (jsonrpc/6) [api.virt] FINISH 
getMigrationStatus return={'status': {'code': 0, 'message': 'Done'}, 
'migrationStats': {'status': {'code': 12, 'message': 'Fatal error during 
migration'}, 'progress': 0}} from=:::10.46.8.133,35048, flow_id=4e86b85d, 
vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:54)
On node 0:
# ls -lZ /run/libvirt/common/system.token
ls: cannot access '/run/libvirt/common/system.token': No such file or directory

On node 1:
# ls -lZ /run/libvirt/common/system.token
-rw---. 1 root root system_u:object_r:virt_var_run_t:s0 32 Jul  6 09:29 
/run/libvirt/common/system.token

any clue?
--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat 
EMEA

sbona...@redhat.com

[ovirt-users] Re: what happens to VMs when a host shuts down?

2021-07-06 Thread Nir Soffer
On Tue, Jul 6, 2021 at 5:58 PM Scott Worthington
 wrote:
>
>
>
> On Tue, Jul 6, 2021 at 8:13 AM Nir Soffer  wrote:
>>
>> On Tue, Jul 6, 2021 at 2:29 PM Sandro Bonazzola  wrote:
>>>
>>>
>>>
>>> On Tue, Jul 6, 2021 at 13:03 Nir Soffer wrote:

 On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet  wrote:
 > We are installing UPS powerchute client on hypervisors.
 >
 > What is the default vms behaviour of running vms when an hypervisor is
 > ordered to shutdown: do the vms live migrate or do they shutdown
 > properly (even the restart on an other host because of HA) ?

 In general VMs are not restarted after an unexpected shutdown, but HA VMs
 are restarted after failures.

 If the HA VM has a lease, it can restart safely on another host regardless 
 of
 the original host status. If the HA VM does not have a lease, the system 
 must
 wait until the original host is up again to check if the VM is still
 running on this
 host.

 Arik can add more details on this.
>>>
>>>
>>> I think the question is not related to what happens after the host is back.
>>> I think the question is what happens when the host goes down.
>>> To me, the right way to shutdown a host is putting it first to maintenance 
>>> (VM evacuate to other hosts) and then shutdown.
>>
>>
>> Right, but we don't have integration with the UPS, so the engine cannot put 
>> the host to maintenance when the host loses power and the UPS will shut it 
>> down after a few minutes.
>
>
> This is outside of the scope of oVirt team:
>
> Perhaps one could combine multiple applications ( NUT + Ansible + 
> Nagios/Zabbix ) to notify the oVirt engine to switch a host to maintenance?
>
> NUT[0] could be configured to alert a monitoring system ( like Nagios or 
> Zabbix) to trigger an Ansible playbook [1][2] to put the host in maintenance 
> mode, and the trigger should happen before the UPS battery is depleted 
> (you'll have to account for the time it takes to live migrate VMs).

I would trigger this once power is lost. You never know how much time
migration will take, so best migrate all VMs immediately.

It would be nice to integrate this with the engine, but we can start with
something like you describe, which will use the engine API/SDK to prepare
the hosts for graceful shutdown.

> [0] Network UPS Tools 
> https://networkupstools.org/docs/user-manual.chunked/index.html
> [1] 
> https://www.ovirt.org/develop/release-management/features/infra/ansible_modules.html
> [2] 
> https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_host_module.html
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D43UXSOGTE7QHJTHUCCW63MWCYH3YM3M/


[ovirt-users] Re: what happens to VMs when a host shuts down?

2021-07-06 Thread Arik Hadas
On Tue, Jul 6, 2021 at 3:13 PM Nir Soffer  wrote:

> On Tue, Jul 6, 2021 at 2:29 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Tue, Jul 6, 2021 at 13:03 Nir Soffer wrote:
>>
>>> On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet 
>>> wrote:
>>> > We are installing UPS powerchute client on hypervisors.
>>> >
>>> > What is the default vms behaviour of running vms when an hypervisor is
>>> > ordered to shutdown: do the vms live migrate or do they shutdown
>>> > properly (even the restart on an other host because of HA) ?
>>>
>>> In general VMs are not restarted after an unexpected shutdown, but HA VMs
>>> are restarted after failures.
>>>
>>> If the HA VM has a lease, it can restart safely on another host
>>> regardless of
>>> the original host status. If the HA VM does not have a lease, the system
>>> must
>>> wait until the original host is up again to check if the VM is still
>>> running on this
>>> host.
>>>
>>> Arik can add more details on this.
>>>
>>
>> I think the question is not related to what happens after the host is
>> back.
>> I think the question is what happens when the host goes down.
>> To me, the right way to shutdown a host is putting it first to
>> maintenance (VM evacuate to other hosts) and then shutdown.
>>
>
> Right, but we don't have integration with the UPS, so the engine cannot
> put the host to maintenance when the host loses power and the UPS will
> shut it down after a few minutes.
>
>
>> On emergency shutdown without moving the host to maintenance first I
>> think libvirt is communicating the host is going down to the guests and
>> tries to cleanly shutdown vms while the host is going down.
>> Arik please confirm :-)
>>
>
Yes, that is correct.
If the host shuts down, libvirt-guests attempts to shut down the guests
gracefully.


>
>>
>>
>>>
>>> Nir
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXVXSLXQYZX6CQPJNXKWLOMY3LQU7XJ5/
>>>
>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>> Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>>
>>
>> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7Q7XXOL3JXL2L4MP6G2Q7OJLKLBEZFVP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SESPV2OKLEOSRMDO6LTK6JZCKI63NINN/


[ovirt-users] Re: what happens to VMs when a host shuts down?

2021-07-06 Thread Scott Worthington
On Tue, Jul 6, 2021 at 8:13 AM Nir Soffer  wrote:

> On Tue, Jul 6, 2021 at 2:29 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Tue, Jul 6, 2021 at 13:03 Nir Soffer wrote:
>>
>>> On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet 
>>> wrote:
>>> > We are installing UPS powerchute client on hypervisors.
>>> >
>>> > What is the default vms behaviour of running vms when an hypervisor is
>>> > ordered to shutdown: do the vms live migrate or do they shutdown
>>> > properly (even the restart on an other host because of HA) ?
>>>
>>> In general VMs are not restarted after an unexpected shutdown, but HA VMs
>>> are restarted after failures.
>>>
>>> If the HA VM has a lease, it can restart safely on another host
>>> regardless of
>>> the original host status. If the HA VM does not have a lease, the system
>>> must
>>> wait until the original host is up again to check if the VM is still
>>> running on this
>>> host.
>>>
>>> Arik can add more details on this.
>>>
>>
>> I think the question is not related to what happens after the host is
>> back.
>> I think the question is what happens when the host goes down.
>> To me, the right way to shutdown a host is putting it first to
>> maintenance (VM evacuate to other hosts) and then shutdown.
>>
>
> Right, but we don't have integration with the UPS, so the engine cannot
> put the host to maintenance when the host loses power and the UPS will
> shut it down after a few minutes.
>

This is outside the scope of the oVirt team:

Perhaps one could combine multiple applications ( NUT + Ansible +
Nagios/Zabbix ) to notify the oVirt engine to switch a host to maintenance?

NUT[0] could be configured to alert a monitoring system (like Nagios or
Zabbix) to trigger an Ansible playbook [1][2] to put the host in
maintenance mode, and the trigger should happen before the UPS battery is
depleted (you'll have to account for the time it takes to live migrate VMs);
a sketch of such glue follows the links below.

[0] Network UPS Tools
https://networkupstools.org/docs/user-manual.chunked/index.html
[1]
https://www.ovirt.org/develop/release-management/features/infra/ansible_modules.html
[2]
https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_host_module.html
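
As a concrete sketch of that glue (everything below is illustrative: the
upsmon/upssched wiring, engine URL, credentials, and host id are all
site-specific):

#!/bin/sh
# /usr/local/bin/evacuate-host.sh -- hooked to the UPS ONBATT event, e.g. via upssched
# puts this hypervisor into maintenance so the engine live-migrates its VMs away
curl -s -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -X POST -d '<action/>' \
  "https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate"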
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NJDNWLLWOOLTHK7KSDEPZBRQYEMMG5KK/


[ovirt-users] Re: Any way to terminate stuck export task

2021-07-06 Thread Gianluca Cecchi
On Tue, Jul 6, 2021 at 2:52 PM Nir Soffer  wrote:


>
> Too bad.
>
> You can evaluate how oVirt 4.4 will work with this appliance using
> this dd command:
>
> dd if=/dev/zero bs=8M count=38400 of=/path/to/new/disk
> oflag=direct conv=fsync
>
> We don't use dd for this, but the operation is the same on NFS < 4.2.
>
>
I confirm I'm able to saturate the 1Gb/s link. I tried creating a 10Gb file
on the StoreOnce appliance:
 # time dd if=/dev/zero bs=8M count=1280 of=/rhev/data-center/mnt/
172.16.1.137\:_nas_EXPORT-DOMAIN/ansible_ova/test.img oflag=direct
conv=fsync
1280+0 records in
1280+0 records out
10737418240 bytes (11 GB) copied, 98.0172 s, 110 MB/s

real 1m38.035s
user 0m0.003s
sys 0m2.366s

So are you saying that after upgrading to 4.4.6 (or the just-released 4.4.7) I
should be able to export at this speed? Or do I need NFS v4.2 anyway?
BTW: is there any capping put in place by oVirt on the export phase (the
qemu-img command in practice), designed for example not to perturb the
activity of the hypervisor? Or do you think that if I have a 10Gb/s network
backend, powerful disks on oVirt, and powerful NFS server processing
power, I should get much more speed?


> Based on the 50 MiB/s rate you reported earlier, I guess you have a
> 1Gbit network to
> this appliance, so zeroing can do up to 128 MiB/s, which will take
> about 40 minutes
> for 300G.
>
> Using NFS 4.2, fallocate will complete in less than a second.
>

I can sort of confirm this also for 4.3.10.
I have a test CentOS 7.4 VM configured as an NFS server and, if I configure it
as an export domain using the default autonegotiate option, it is
(strangely enough) mounted as NFS v4.1 and the initial fallocate takes some
minutes (55Gb disk).
If I reconfigure it forcing NFS v4.2, the initial fallocate
is immediate, in the sense that "ls -l" on the export domain shows the full
size of the virtual disk almost immediately.
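
A quick way to reproduce that check on any export domain mount (a sketch; the
mount path is a placeholder):

# confirm the negotiated NFS version for the mount (look for vers=4.2 in the options)
mount | grep EXPORT-DOMAIN
# time a preallocation; on NFS 4.2 this should return almost instantly
time fallocate -l 10G /rhev/data-center/mnt/server:_nas_EXPORT-DOMAIN/test.img
rm /rhev/data-center/mnt/server:_nas_EXPORT-DOMAIN/test.img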

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YPUUKAKGVIL53JIW3EG5EOFXQJATJDUM/


[ovirt-users] Re: Issue With HE HA after upgrading to 4.4.7

2021-07-06 Thread Yedidyah Bar David
On Tue, Jul 6, 2021 at 5:08 PM Nur Imam Febrianto  wrote:
>
> Hi. Recently I upgraded our server cluster from 4.4.6 to 4.4.7. After 
> upgrading the HE and several hosts, every single host that was upgraded and 
> activated has an issue with its HA score. It always shows HA score 0, and 
> rebooting the host doesn’t help. Any idea how to check this issue?

Calculating the score takes time, and spreading this around the
cluster also takes time.

You should find more information in the ovirt-hosted-engine-ha logs
(both agent and broker).
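
For example (a sketch; the command and log paths follow the usual
ovirt-hosted-engine-ha layout):

# current HA score as seen by the cluster
hosted-engine --vm-status
# agent and broker logs on the affected host
tail -n 100 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 100 /var/log/ovirt-hosted-engine-ha/broker.log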

Good luck and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BFO6N6XJTNZG57HQHC7BCP2BF5H77H3D/


[ovirt-users] Issue With HE HA after upgrading to 4.4.7

2021-07-06 Thread Nur Imam Febrianto
Hi. Recently I upgraded our server cluster from 4.4.6 to 4.4.7. After 
upgrading the HE and several hosts, every single host that was upgraded and 
activated has an issue with its HA score. It always shows HA score 0, and 
rebooting the host doesn’t help. Any idea how to check this issue?
Thanks.

Regards,
Nur Imam Febrianto
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NEWUEVLA623IWFN4NTCSLJICF3G7HS3E/


[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Sandro Bonazzola
On Tue, Jul 6, 2021 at 3:36 PM Nir Soffer wrote:

> On Tue, Jul 6, 2021 at 4:27 PM Sandro Bonazzola 
> wrote:
>
> >>> This looks like the selinux issue we had in libvirt 7.4. Do we have
> the latest
> >>> selinux-policy-target package on the host?
> >>
> >>
> >> Yeah, seems like https://bugzilla.redhat.com/show_bug.cgi?id=1964317
> >
> >
> > In order to get the migration working this solved:
> >  restorecon /var/run/libvirt/common/system.token
> >  ls -lZ /var/run/libvirt/common/system.token
> > -rw-------. 1 root root system_u:object_r:virt_common_var_run_t:s0 32
> Jul  6 09:29 /var/run/libvirt/common/system.token
> > service libvirtd restart
> > service virtlogd restart
>
> On the source or destination?
>
>
Destination, 4.4.7

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XQ6R222IJRCV35YCJSGGTOW44AZUGDQZ/


[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Nir Soffer
On Tue, Jul 6, 2021 at 4:27 PM Sandro Bonazzola  wrote:

>>> This looks like the selinux issue we had in libvirt 7.4. Do we have the 
>>> latest
>>> selinux-policy-target package on the host?
>>
>>
>> Yeah, seems like https://bugzilla.redhat.com/show_bug.cgi?id=1964317
>
>
> In order to get the migration working this solved:
>  restorecon /var/run/libvirt/common/system.token
>  ls -lZ /var/run/libvirt/common/system.token
> -rw-------. 1 root root system_u:object_r:virt_common_var_run_t:s0 32 Jul  6
> 09:29 /var/run/libvirt/common/system.token
> service libvirtd restart
> service virtlogd restart

On the source or destination?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3NM6XWIVWE7NBSWWKHDD52OV5JUMYWNN/


[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Sandro Bonazzola
On Tue, Jul 6, 2021 at 3:18 PM Arik Hadas wrote:

>
>
>
> On Tue, Jul 6, 2021 at 3:56 PM Nir Soffer  wrote:
>
>> On Tue, Jul 6, 2021 at 3:36 PM Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> I updated the hosted engine to 4.4.7 and one of the 2 nodes where the
>>> engine is running.
>>> Current status is:
>>> - Hosted engine at 4.4.7 running on Node 0
>>> - Node 0 at 4.4.6
>>> - Node 1 at 4.4.7
>>>
>>> Now, moving Node 0 to maintenance successfully moved the SPM from Node 0
>>> to Node 1 but while trying to migrate hosted engine I get on Node 0
>>> vdsm.log:
>>>
>> ...
>>
>>>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in 
>>> migrateToURI3
>>> raise libvirtError('virDomainMigrateToURI3() failed')
>>> libvirt.libvirtError: can't connect to virtlogd: Unable to open system 
>>> token /run/libvirt/common/system.token: Permission denied
>>>
>>>
>> This looks like the selinux issue we had in libvirt 7.4. Do we have the
>> latest
>> selinux-policy-target package on the host?
>>
>
> Yeah, seems like https://bugzilla.redhat.com/show_bug.cgi?id=1964317
>

In order to get the migration working this solved:
 restorecon /var/run/libvirt/common/system.token
 ls -lZ /var/run/libvirt/common/system.token
-rw-------. 1 root root system_u:object_r:virt_common_var_run_t:s0 32 Jul
 6 09:29 /var/run/libvirt/common/system.token
service libvirtd restart
service virtlogd restart

+Lev Veyde  sounds like
https://bugzilla.redhat.com/show_bug.cgi?id=1955415 is not 100% fixed then


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MNHXVWKWKWLZRA4SVZEOZLJ7HOOMNNSF/


[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Sandro Bonazzola
On Tue, Jul 6, 2021 at 2:56 PM Nir Soffer wrote:

> On Tue, Jul 6, 2021 at 3:36 PM Sandro Bonazzola 
> wrote:
>
>> Hi,
>> I updated the hosted engine to 4.4.7 and one of the 2 nodes where the
>> engine is running.
>> Current status is:
>> - Hosted engine at 4.4.7 running on Node 0
>> - Node 0 at 4.4.6
>> - Node 1 at 4.4.7
>>
>> Now, moving Node 0 to maintenance successfully moved the SPM from Node 0
>> to Node 1 but while trying to migrate hosted engine I get on Node 0
>> vdsm.log:
>>
> ...
>
>>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in 
>> migrateToURI3
>> raise libvirtError('virDomainMigrateToURI3() failed')
>> libvirt.libvirtError: can't connect to virtlogd: Unable to open system token 
>> /run/libvirt/common/system.token: Permission denied
>>
>>
> This looks like the selinux issue we had in libvirt 7.4. Do we have the
> latest
> selinux-policy-target package on the host?
>

Nir, this is a 4.4.6 host with
# rpm -qv libvirt
libvirt-7.0.0-14.el8s.x86_64
# rpm -qv vdsm
vdsm-4.40.60.7-1.el8.x86_64
# rpm -qv selinux-policy
selinux-policy-3.14.3-67.el8.noarch

which is migrating the hosted engine VM to a just upgraded 4.4.7 node with:

# rpm -qv libvirt
libvirt-7.4.0-1.el8s.x86_64
# rpm -qv vdsm
vdsm-4.40.70.6-1.el8.x86_64
# rpm -qv selinux-policy
selinux-policy-3.14.3-71.el8.noarch

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GRULNVR44QBVJPDSLXO42MM6OC6TJCGC/


[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Arik Hadas
On Tue, Jul 6, 2021 at 3:56 PM Nir Soffer  wrote:

> On Tue, Jul 6, 2021 at 3:36 PM Sandro Bonazzola 
> wrote:
>
>> Hi,
>> I updated the hosted engine to 4.4.7 and one of the 2 nodes where the
>> engine is running.
>> Current status is:
>> - Hosted engine at 4.4.7 running on Node 0
>> - Node 0 at 4.4.6
>> - Node 1 at 4.4.7
>>
>> Now, moving Node 0 to maintenance successfully moved the SPM from Node 0
>> to Node 1 but while trying to migrate hosted engine I get on Node 0
>> vdsm.log:
>>
> ...
>
>>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in 
>> migrateToURI3
>> raise libvirtError('virDomainMigrateToURI3() failed')
>> libvirt.libvirtError: can't connect to virtlogd: Unable to open system token 
>> /run/libvirt/common/system.token: Permission denied
>>
>>
> This looks like the selinux issue we had in libvirt 7.4. Do we have the
> latest
> selinux-policy-target package on the host?
>

Yeah, seems like https://bugzilla.redhat.com/show_bug.cgi?id=1964317
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IUMARKVV7KU3DNHS6YTN7EKO3NXVPIAZ/


[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Nir Soffer
On Tue, Jul 6, 2021 at 3:36 PM Sandro Bonazzola  wrote:

> Hi,
> I updated the hosted engine to 4.4.7 and one of the 2 nodes where the
> engine is running.
> Current status is:
> - Hosted engine at 4.4.7 running on Node 0
> - Node 0 at 4.4.6
> - Node 1 at 4.4.7
>
> Now, moving Node 0 to maintenance successfully moved the SPM from Node 0
> to Node 1 but while trying to migrate hosted engine I get on Node 0
> vdsm.log:
>
...

>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in 
> migrateToURI3
> raise libvirtError('virDomainMigrateToURI3() failed')
> libvirt.libvirtError: can't connect to virtlogd: Unable to open system token 
> /run/libvirt/common/system.token: Permission denied
>
>
This looks like the selinux issue we had in libvirt 7.4. Do we have the
latest
selinux-policy-target package on the host?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWSOG2KKMYYASJJZVDG7SIVEXWL5P3XD/


[ovirt-users] Re: Any way to terminate stuck export task

2021-07-06 Thread Nir Soffer
On Tue, Jul 6, 2021 at 10:21 AM Gianluca Cecchi
 wrote:
>
> On Mon, Jul 5, 2021 at 5:06 PM Nir Soffer  wrote:
>
>>
>>
>> qemu-img is busy in posix_fallocate(), writing one byte to every 4k block.
>>
>> If you add -tt -T (as I suggested), we can see how much time each write 
>> takes,
>> which may explain why this takes so much time.
>>
>> strace -f -p 14342 -tt -T
>>
>
> It seems I missed part of your suggestion... I didn't get the "-tt -T" (or I 
> didn't see it...)
>
> With it I get this during the export (in networking of host console 4 
> mbit/s):
>
> # strace -f -p 25243 -tt -T
> strace: Process 25243 attached with 2 threads
> [pid 25243] 09:17:32.503907 ppoll([{fd=9, events=POLLIN|POLLERR|POLLHUP}], 1, 
> NULL, NULL, 8 <unfinished ...>
> [pid 25244] 09:17:32.694207 pwrite64(12, "\0", 1, 3773509631) = 1 <0.000059>
> [pid 25244] 09:17:32.694412 pwrite64(12, "\0", 1, 3773513727) = 1 <0.000078>
> [pid 25244] 09:17:32.694608 pwrite64(12, "\0", 1, 3773517823) = 1 <0.000056>
> [pid 25244] 09:17:32.694729 pwrite64(12, "\0", 1, 3773521919) = 1 <0.000024>
> [pid 25244] 09:17:32.694796 pwrite64(12, "\0", 1, 3773526015) = 1 <0.000020>
> [pid 25244] 09:17:32.694855 pwrite64(12, "\0", 1, 3773530111) = 1 <0.000015>
> [pid 25244] 09:17:32.694908 pwrite64(12, "\0", 1, 3773534207) = 1 <0.000014>
> [pid 25244] 09:17:32.694950 pwrite64(12, "\0", 1, 3773538303) = 1 <0.000016>
> [pid 25244] 09:17:32.694993 pwrite64(12, "\0", 1, 3773542399) = 1 <0.200032>
> [pid 25244] 09:17:32.895140 pwrite64(12, "\0", 1, 3773546495) = 1 <0.000034>
> [pid 25244] 09:17:32.895227 pwrite64(12, "\0", 1, 3773550591) = 1 <0.000029>
> [pid 25244] 09:17:32.895296 pwrite64(12, "\0", 1, 3773554687) = 1 <0.000024>
> [pid 25244] 09:17:32.895353 pwrite64(12, "\0", 1, 3773558783) = 1 <0.000016>
> [pid 25244] 09:17:32.895400 pwrite64(12, "\0", 1, 3773562879) = 1 <0.000015>
> [pid 25244] 09:17:32.895443 pwrite64(12, "\0", 1, 3773566975) = 1 <0.000015>
> [pid 25244] 09:17:32.895485 pwrite64(12, "\0", 1, 3773571071) = 1 <0.000015>
> [pid 25244] 09:17:32.895527 pwrite64(12, "\0", 1, 3773575167) = 1 <0.000017>
> [pid 25244] 09:17:32.895570 pwrite64(12, "\0", 1, 3773579263) = 1 <0.199493>
> [pid 25244] 09:17:33.095147 pwrite64(12, "\0", 1, 3773583359) = 1 <0.000031>
> [pid 25244] 09:17:33.095262 pwrite64(12, "\0", 1, 3773587455) = 1 <0.000061>
> [pid 25244] 09:17:33.095378 pwrite64(12, "\0", 1, 3773591551) = 1 <0.000027>
> [pid 25244] 09:17:33.095445 pwrite64(12, "\0", 1, 3773595647) = 1 <0.000021>
> [pid 25244] 09:17:33.095498 pwrite64(12, "\0", 1, 3773599743) = 1 <0.000016>
> [pid 25244] 09:17:33.095542 pwrite64(12, "\0", 1, 3773603839) = 1 <0.000014>

Most writes are pretty fast, but from time to time there is a very slow write.

From the small sample you posted, we have:

awk '{print $11}' strace.out | sed -e "s/<//" -e "s/>//" | awk
'{sum+=$1; if ($1 < 0.1) {fast+=$1; fast_nr++} else {slow+=$1;
slow_nr++}} END{printf "average: %.6f slow: %.6f fast: %.6f\n",
sum/NR, slow/slow_nr, fast/fast_nr}'
average: 0.016673 slow: 0.199763 fast: 0.000028

Preallocating a 300 GiB disk will take about 15 days :-)

>>> 300*1024**3 / 4096 * 0.016673 / 3600 / 24
15.176135

If all writes would be fast, it will take less than an hour:

>>> 300*1024**3 / 4096 * 0.000028 / 3600
0.61166933

> . . .
>
> BTW: it seems my NAS appliance doesn't support 4.2 version of NFS, because if 
> I force it, I then get an error in mount and in engine.log this error for 
> both nodes as they try to mount:
>
> 2021-07-05 17:01:56,082+02 ERROR 
> [org.ovirt.engine.core.bll.storage.connection.FileStorageHelper] 
> (EE-ManagedThreadFactory-engine-Thread-2554190) [642eb6be] The connection 
> with details '172.16.1.137:/nas/EXPORT-DOMAIN' failed because of error code 
> '477' and error message is: problem while trying to mount target
>
>
> and in vdsm.log:
> MountError: (32, ';mount.nfs: Protocol not supported\n')

Too bad.

You can evaluate how oVirt 4.4 will work with this appliance using
this dd command:

dd if=/dev/zero bs=8M count=38400 of=/path/to/new/disk
oflag=direct conv=fsync

We don't use dd for this, but the operation is the same on NFS < 4.2.

Based on the 50 MiB/s rate you reported earlier, I guess you have a
1Gbit network to
this appliance, so zeroing can do up to 128 MiB/s, which will take
about 40 minutes
for 300G.
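(Quick check: 300 GiB = 307,200 MiB, and 307,200 MiB / 128 MiB/s = 2,400 s,
which is 40 minutes.)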

Using NFS 4.2, fallocate will complete in less than a second.

Here is example from my test system, creating 90g raw preallocated volume:

2021-07-06 15:46:40,382+0300 INFO  (tasks/1) [storage.Volume] Request
to create RAW volume /rhev/data-center/mnt/storage2:_exp
ort_00/a600ba04-34f9-4793-a5dc-6d4150716d14/images/bcf7c623-8fd8-47b3-aaee-a65c0872536d/82def38d-b41b-4126-826e-0513d669f1b5
with capacity = 96636764160 (fileVolume:493)
...
2021-07-06 15:46:40,447+0300 INFO  (tasks/1) [storage.Volume]
Preallocating volume
/rhev/data-center/mnt/storage2:_export_00/a600ba04-34f9-4793-a5dc-6d4150716d14/images/bcf7c623-8fd8-47b3-aaee-a65c0872536d/82def38d-b41b-4126-826e-0513d669

[ovirt-users] Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host

2021-07-06 Thread Sandro Bonazzola
Hi,
I updated the hosted engine to 4.4.7 and one of the 2 nodes where the engine
is running.
Current status is:
- Hosted engine at 4.4.7 running on Node 0
- Node 0 at 4.4.6
- Node 1 at 4.4.7

Now, moving Node 0 to maintenance successfully moved the SPM from Node 0 to
Node 1 but while trying to migrate hosted engine I get on Node 0 vdsm.log:

2021-07-06 12:25:07,882+0000 INFO  (jsonrpc/5) [vdsm.api] START
repoStats(domains=()) from=::ffff:10.46.8.133,35048,
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:48)
2021-07-06 12:25:07,882+0000 INFO  (jsonrpc/5) [vdsm.api] FINISH
repoStats return={'1996dc3b-d33f-49cb-b32a-8f7b1d50af5e': {'code': 0,
'lastCheck': '3.0', 'delay': '0.00114065', 'valid': True, 'version':
5, 'acquired': True, 'actual': True}} from=::ffff:10.46.8.133,35048,
task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:54)
2021-07-06 12:25:07,882+0000 INFO  (jsonrpc/5) [vdsm.api] START
multipath_health() from=::ffff:10.46.8.133,35048,
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:48)
2021-07-06 12:25:07,882+0000 INFO  (jsonrpc/5) [vdsm.api] FINISH
multipath_health return={} from=::ffff:10.46.8.133,35048,
task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:54)
2021-07-06 12:25:07,883+0000 ERROR (migsrc/b2072331) [virt.vm]
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') can't connect to
virtlogd: Unable to open system token
/run/libvirt/common/system.token: Permission denied (migration:294)
2021-07-06 12:25:07,888+0000 INFO  (jsonrpc/5) [api.host] FINISH
getStats return={'status': {'code': 0, 'message': 'Done'}, 'info':
(suppressed)} from=::ffff:10.46.8.133,35048 (api:54)
2021-07-06 12:25:08,166+0000 ERROR (migsrc/b2072331) [virt.vm]
(vmId='b2072331-1558-4186-86b4-fa83af8eba95') Failed to migrate
(migration:467)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line
441, in _regular_run
time.time(), machineParams
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line
537, in _startUnderlyingMigration
self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line
626, in _perform_with_conv_schedule
self._perform_migration(duri, muri)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line
555, in _perform_migration
self._migration_flags)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line
159, in call
return getattr(self._vm._dom, name)(*a, **kw)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
line 94, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in
migrateToURI3
raise libvirtError('virDomainMigrateToURI3() failed')
libvirt.libvirtError: can't connect to virtlogd: Unable to open system
token /run/libvirt/common/system.token: Permission denied
2021-07-06 12:25:08,197+0000 INFO  (jsonrpc/6) [api.virt] START
getMigrationStatus() from=::ffff:10.46.8.133,35048, flow_id=4e86b85d,
vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:48)
2021-07-06 12:25:08,197+0000 INFO  (jsonrpc/6) [api.virt] FINISH
getMigrationStatus return={'status': {'code': 0, 'message': 'Done'},
'migrationStats': {'status': {'code': 12, 'message': 'Fatal error
during migration'}, 'progress': 0}} from=::ffff:10.46.8.133,35048,
flow_id=4e86b85d, vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:54)

On node 0:
# ls -lZ /run/libvirt/common/system.token
ls: cannot access '/run/libvirt/common/system.token': No such file or
directory

On node 1:
# ls -lZ /run/libvirt/common/system.token
-rw-------. 1 root root system_u:object_r:virt_var_run_t:s0 32 Jul  6 09:29
/run/libvirt/common/system.token

any clue?
-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BTSW2RTA4UJHIEBTSMLPSKUPWF2QUNB4/


[ovirt-users] Re: How to Upgrade Node with Local Storage ?

2021-07-06 Thread Nur Imam Febrianto
The upgrade script failed in yum. When I run nodectl info, the new oVirt Node
NG image is not installed at all.
Where should I look to debug this? I can't afford to lose any data on this
local storage domain if I want to upgrade.
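For reference, a minimal sketch of places worth checking, assuming a standard
oVirt Node NG layout (the image-update package name is the usual one, but
verify it for your version):

# nodectl info                              # installed/available image layers
# less /var/log/imgbased.log                # imgbased performs the layered image install
# dnf reinstall ovirt-node-ng-image-update  # retry the image update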

Thanks.

Sent from Mail for Windows 10

From: Vojtech Juranek
Sent: 22 April 2021 12:57
To: users@ovirt.org
Subject: [ovirt-users] Re: How to Upgrade Node with Local Storage ?

On Wednesday, 21 April 2021 16:40:29 CEST Nur Imam Febrianto wrote:
> Set global maintenance and then turn off all vm, do yum update but it
> completed with failed. Am I missing something ?

can you share the details? What failed, what was the error?


> From: Adam Xu
> Sent: 20 April 2021 7:36
> To: users@ovirt.org
> Subject: [ovirt-users] Re: How to Upgrade Node with Local Storage ?
>
>
> For an oVirt Node that is using local storage, I think you should shut down
> all your VMs before you upgrade the node. On 2021/4/19 22:09, Nur Imam
> Febrianto wrote:
> Hi,
>
> How can we upgrade an oVirt Node that uses local storage? It seems I can't
> find any good documentation about this. Planning to upgrade one 4.4.4 node
> with local storage to 4.4.5.
> Thanks in advance.
>
> Regards,
> Nur Imam Febrianto
>
>
>
> ___
>
> Users mailing list -- users@ovirt.org
>
> To unsubscribe send an email to
> users-le...@ovirt.org
>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZOCHCAJPXO4LT

[ovirt-users] Re: what happens to vms when a host shutdowns?

2021-07-06 Thread Nir Soffer
On Tue, Jul 6, 2021 at 2:29 PM Sandro Bonazzola  wrote:

>
>
> On Tue, Jul 6, 2021 at 1:03 PM Nir Soffer wrote:
>
>> On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet 
>> wrote:
>> > We are installing UPS powerchute client on hypervisors.
>> >
>> > What is the default vms behaviour of running vms when an hypervisor is
>> > ordered to shutdown: do the vms live migrate or do they shutdown
>> > properly (even the restart on an other host because of HA) ?
>>
>> In general VMs are not restarted after an unexpected shutdown, but HA VMs
>> are restarted after failures.
>>
>> If the HA VM has a lease, it can restart safely on another host
>> regardless of
>> the original host status. If the HA VM does not have a lease, the system
>> must
>> wait until the original host is up again to check if the VM is still
>> running on this
>> host.
>>
>> Arik can add more details on this.
>>
>
> I think the question is not related to what happens after the host is back.
> I think the question is what happens when the host goes down.
> To me, the right way to shutdown a host is putting it first to maintenance
> (VM evacuate to other hosts) and then shutdown.
>

Right, but we don't have integration with the UPS, so the engine cannot put
the host into maintenance when the host loses power and the UPS shuts it down
after a few minutes.


> On emergency shutdown without moving the host to maintenance first I think
> libvirt is communicating the host is going down to the guests and tries to
> cleanly shutdown vms while the host is going down.
> Arik please confirm :-)
>
>
>
>>
>> Nir
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXVXSLXQYZX6CQPJNXKWLOMY3LQU7XJ5/
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> *
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7Q7XXOL3JXL2L4MP6G2Q7OJLKLBEZFVP/


[ovirt-users] Re: CentOS 8 to Stream conversion

2021-07-06 Thread Sandro Bonazzola
On Tue, Jun 29, 2021 at 7:40 PM Gary Pedretty wrote:

> I have moved all my hosts from CentOS 8 to Stream, but the hosted engine
> is still Centos 8.  Is it safe to use the same conversion of the
> hosted-engine from 8 to Stream while in global maintenance mode?
>

Yes. I would recommend running engine-setup again after the distro-sync,
though it shouldn't be necessary unless the engine got updates as well.
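For the hosted engine VM the full sequence would look roughly like this (a
sketch assuming a hosted-engine deployment; run the maintenance commands on
one of the hosts and the conversion on the engine VM):

# hosted-engine --set-maintenance --mode=global
# ssh engine 'dnf swap centos-{linux,stream}-repos && dnf distro-sync'
# ssh engine 'engine-setup'
# hosted-engine --set-maintenance --mode=none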


>
> IE:
>
> dnf swap centos-{linux,stream}-repos
>
> dnf distro-sync
>
>
> Thanks
>
> Gary
>
>
>
>
>
>
> ___
> Gary Pedretty
> IT Manager
> Ravn Alaska
>
> Office: 907-266-8451
> Mobile: 907-388-2247
> Email: gary.pedre...@ravnalaska.com
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/J2JZGHUI3ZYCLO6RPETFB3CUIFPM7LUM/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6322SPA66JAU4X6FSGZQDI25VHRDH3R/


[ovirt-users] Re: what happens to vms when a host shutdowns?

2021-07-06 Thread Sandro Bonazzola
On Tue, Jul 6, 2021 at 1:45 PM Klaas Demter wrote:

> This should be implemented in 4.2.3 and newer. There are more details in
> the RFE bugzilla:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1334982
>
> I have never tested it :)
>

Corresponding libvirt bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1401054



>
>
>
> On 7/6/21 1:29 PM, Sandro Bonazzola wrote:
>
>
>
> On Tue, Jul 6, 2021 at 1:03 PM Nir Soffer wrote:
>
>> On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet 
>> wrote:
>> > We are installing UPS powerchute client on hypervisors.
>> >
>> > What is the default vms behaviour of running vms when an hypervisor is
>> > ordered to shutdown: do the vms live migrate or do they shutdown
>> > properly (even the restart on an other host because of HA) ?
>>
>> In general VMs are not restarted after an unexpected shutdown, but HA VMs
>> are restarted after failures.
>>
>> If the HA VM has a lease, it can restart safely on another host
>> regardless of
>> the original host status. If the HA VM does not have a lease, the system
>> must
>> wait until the original host is up again to check if the VM is still
>> running on this
>> host.
>>
>> Arik can add more details on this.
>>
>
> I think the question is not related to what happens after the host is back.
> I think the question is what happens when the host goes down.
> To me, the right way to shutdown a host is putting it first to maintenance
> (VM evacuate to other hosts) and then shutdown.
> On emergency shutdown without moving the host to maintenance first I think
> libvirt is communicating the host is going down to the guests and tries to
> cleanly shutdown vms while the host is going down.
> Arik please confirm :-)
>
>
>
>>
>> Nir
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXVXSLXQYZX6CQPJNXKWLOMY3LQU7XJ5/
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> *
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZQFLN7YMFKZIMC6COWSG6COKHKTESOIY/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CG2MGW2VBX255JWJMG6B6TVTY7FUWXIX/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O3TSOPL7FVRVC4G7L6W7YXHZ55KECLVG/


[ovirt-users] Set fixed VNC/Spice Password for VMs.

2021-07-06 Thread Merlin Timm

Good day to all,

I have a question about the console configuration of the VMs:

By default, for each console connection to a VM a password is set that is
valid for 120 seconds; after that you can't use it again. We currently have
the following concern:


We want to access and control the VMs via the VNC/Spice console of the oVirt
host. We have already tried to use the password from the console.vv for the
connection, and that works so far. Unfortunately we have to do this every 2
minutes when we want to connect again. We are currently building an automatic
test pipeline, and for this we need to access the VMs remotely before OS
start, and we want to be independent of a VNC server on the guest. This is
only possible if we can connect to the VNC/Spice server on the oVirt host.


My question: would it be possible to set a fixed password, or to read the
current one out via the API every time we want to connect?
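For what it's worth, the REST API exposes a per-VM "ticket" action that
returns a fresh console password and lets the caller choose its expiry, so a
pipeline can request one right before each connect. A minimal sketch (engine
FQDN, credentials and VM id are placeholders):

curl -s -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -d '<action><ticket><expiry>3600</expiry></ticket></action>' \
  https://engine.example.com/ovirt-engine/api/vms/VM-UUID/ticket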


I would appreciate a reply very much!

Best regards
Merlin Timm
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDPGLBQ4DWE64NATDDFDUB2TZLAHS6SV/


[ovirt-users] Re: Windows 10 Guest with nvidia Tesla P40 vGpu Bluescreens when I enable nested virtualization

2021-07-06 Thread Sandro Bonazzola
Can you please report the issue with the kernel dump to
https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%208&component=kernel
and select version "CentOS Stream" ?


On Wed, Jun 30, 2021 at 5:59 AM  wrote:

> Hi all,
>
> I have this issue when I go to create a VM: I want to run WSL2, but when I
> install it or Windows Defender Application Guard the computer bluescreens.
>
>
> So I am using a CentOS 8 Stream host connected to a VirtIO cluster; the
> hardware is 2 Nvidia Tesla P40 graphics cards installed in a PowerEdge
> R480. My Windows 10 20H2 image would bluescreen after the Nvidia drivers
> are installed. I did some testing and discovered, after installing a clean
> Windows 10 20H2 ISO, that it worked fine even after installing the drivers
> for the vGPU, but the master image does have WSL2 installed out of the box,
> so I installed it on my vanilla image and it would bluescreen. I imagine
> this is due to the nested virtualization and the installation of some
> Hyper-V service; maybe it is trying to initialize some hardware
> acceleration. Has anyone had this issue? I really need WSL and would love
> to use version 2.
>
>
> Tesla versions I have tried are 450.12 and the latest 460.32.
>
>
> Kernel Bitmap Dump File: Kernel address space is available, User address
> space may not be available.
>
>
> Symbol search path is: srv*
>
> Executable search path is:
>
> Windows 10 Kernel Version 19041 MP (4 procs) Free x64
>
> Product: WinNt, suite: TerminalServer SingleUserTS
>
> Built by: 19041.1.amd64fre.vb_release.191206-1406
>
> Machine Name:
>
> Kernel base = 0xf805`5e80 PsLoadedModuleList = 0xf805`5f42a490
>
> Debug session time: Sat Jun 26 04:58:37.807 2021 (UTC - 7:00)
>
> System Uptime: 0 days 0:00:09.469
>
> Loading Kernel Symbols
>
> ...
>
> ...Page 799b not present in the dump file. Type
> ".hh dbgerr004" for details
>
> .
>
> .
>
> Loading User Symbols
>
> PEB is paged out (Peb.Ldr = 002e`f91fa018). Type ".hh dbgerr001" for
> details
>
> Loading unloaded module list
>
> ...
>
> For analysis of this file, run !analyze -v
>
> 3: kd> !analyze -v
>
> ERROR: FindPlugIns 8007007b
>
>
> ***
>
> * *
>
> * Bugcheck Analysis *
>
> * *
>
>
> ***
>
>
> SYSTEM_SERVICE_EXCEPTION (3b)
>
> An exception happened while executing a system service routine.
>
> Arguments:
>
> Arg1: c0000005, Exception code that caused the bugcheck
>
> Arg2: f80567d55b24, Address of the instruction which caused the
> bugcheck
>
> Arg3: 8487974646a0, Address of the context record for the exception
> that caused the bugcheck
>
> Arg4: 0000000000000000, zero.
>
>
> Debugging Details:
>
> --
>
>
> Page fd6196 not present in the dump file. Type ".hh dbgerr004" for details
>
> Page fd6196 not present in the dump file. Type ".hh dbgerr004" for details
>
>
> KEY_VALUES_STRING: 1
>
>
> Key : Analysis.CPU.Sec
>
> Value: 3
>
>
> Key : Analysis.DebugAnalysisProvider.CPP
>
> Value: Create: 8007007e on BCCO050
>
>
> Key : Analysis.DebugData
>
> Value: CreateObject
>
>
> Key : Analysis.DebugModel
>
> Value: CreateObject
>
>
> Key : Analysis.Elapsed.Sec
>
> Value: 27
>
>
> Key : Analysis.Memory.CommitPeak.Mb
>
> Value: 81
>
>
> Key : Analysis.System
>
> Value: CreateObject
>
>
>
> BUGCHECK_CODE: 3b
>
>
> BUGCHECK_P1: c0000005
>
>
> BUGCHECK_P2: f80567d55b24
>
>
> BUGCHECK_P3: 8487974646a0
>
>
> BUGCHECK_P4: 0
>
>
> CONTEXT: 8487974646a0 -- (.cxr 0x8487974646a0)
>
> rax= rbx=b406f7d85000 rcx=e17f3f1e8efe
>
> rdx= rsi= rdi=b406f1667270
>
> rip=f80567d55b24 rsp=8487974650a0 rbp=0002
>
> r8= r9= r10=
>
> r11=848797465040 r12=f80568511c80 r13=b406f1667270
>
> r14=b406f194c660 r15=
>
> iopl=0 nv up ei pl nz na pe nc
>
> cs=0010 ss=0018 ds=002b es=002b fs=0053 gs=002b efl=00050202
>
> nvlddmkm+0x1d5b24:
>
> f805`67d55b24 4c8b80c822 mov r8,qword ptr [rax+22C8h]
> ds:002b:`22c8=
>
> Resetting default scope
>
>
> PROCESS_NAME: csrss.exe
>
>
> STACK_TEXT:
>
> 8487`974650a0 f805`67cab38a : b406`f7d85000 b406`f194c660
> ` 8487`97465220 : nvlddmkm+0x1d5b24
>
> 8487`974650e0 f805`67cdb663 : ` b406`f7d85000
> 8487`97465220 b406`f1989000 : nvlddmkm+0x12b38a
>
> 8487`97465160 f805`67cb5df8 : b406`f7d85000 b406`f7d85000
> b406`f7d85000 8487`97465220 : nvlddmkm+0x15b663
>
> 8487`97465190 f805`67ca3ce5 : b406`f1a3b000 `0001
> b406`0001 b406`f7d85000 : nvlddmkm+0x1

[ovirt-users] Re: what happens to vms when a host shutdowns?

2021-07-06 Thread Klaas Demter
This should be implemented in 4.2.3 and newer. There are more details in 
the RFE bugzilla:


https://bugzilla.redhat.com/show_bug.cgi?id=1334982

I have never tested it :)




On 7/6/21 1:29 PM, Sandro Bonazzola wrote:



On Tue, Jul 6, 2021 at 1:03 PM Nir Soffer wrote:


On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet <blanc...@abes.fr> wrote:
> We are installing UPS powerchute client on hypervisors.
>
> What is the default vms behaviour of running vms when an
hypervisor is
> ordered to shutdown: do the vms live migrate or do they shutdown
> properly (even the restart on an other host because of HA) ?

In general VMs are not restarted after an unexpected shutdown, but
HA VMs
are restarted after failures.

If the HA VM has a lease, it can restart safely on another host
regardless of
the original host status. If the HA VM does not have a lease, the
system must
wait until the original host is up again to check if the VM is still
running on this
host.

Arik can add more details on this.


I think the question is not related to what happens after the host is back.
I think the question is what happens when the host goes down.
To me, the right way to shut down a host is putting it into maintenance first
(VMs evacuate to other hosts) and then shutting down.
On emergency shutdown without moving the host to maintenance first, I think
libvirt communicates to the guests that the host is going down and tries to
cleanly shut down VMs while the host is going down.

Arik, please confirm :-)


Nir
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXVXSLXQYZX6CQPJNXKWLOMY3LQU7XJ5/





--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com 

 

*Red Hat respects your work life balance. Therefore there is no need
to answer this email out of your office hours.*

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZQFLN7YMFKZIMC6COWSG6COKHKTESOIY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CG2MGW2VBX255JWJMG6B6TVTY7FUWXIX/


[ovirt-users] Re: migrate hosted engine

2021-07-06 Thread Sandro Bonazzola
Hi,
can you please provide:
`yum -q list installed centos-release ovirt-release\* ovirt-engine
redhat-release vdsm glusterfs` output on the two hosts and on the engine?

Can you also provide vdsm.log from the two hosts and engine.log from the
engine?

Ideally, it would be great if you can share a ovirt-log-collector generated
report.



On Thu, Jul 1, 2021 at 6:55 PM Harry O wrote:

> When I put in the "/etc/pki/CA/cacert.pem" from the old node, I just get
> next error as follows:
> Migration failed due to an Error: Fatal error during migration (VM:
> HostedEngine, Source: hej1.5ervers.lan, Destination: hej2.5ervers.lan).
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LFPPS4WXSRGM54VLJDRYF723RSTPNBOO/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EWDAMTM2ELNAVGB54OH46YFCKAT7OK3V/


[ovirt-users] Re: what happens to vms when a host shutdowns?

2021-07-06 Thread Sandro Bonazzola
On Tue, Jul 6, 2021 at 1:03 PM Nir Soffer wrote:

> On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet 
> wrote:
> > We are installing UPS powerchute client on hypervisors.
> >
> > What is the default vms behaviour of running vms when an hypervisor is
> > ordered to shutdown: do the vms live migrate or do they shutdown
> > properly (even the restart on an other host because of HA) ?
>
> In general VMs are not restarted after an unexpected shutdown, but HA VMs
> are restarted after failures.
>
> If the HA VM has a lease, it can restart safely on another host regardless
> of
> the original host status. If the HA VM does not have a lease, the system
> must
> wait until the original host is up again to check if the VM is still
> running on this
> host.
>
> Arik can add more details on this.
>

I think the question is not related to what happens after the host is back.
I think the question is what happens when the host goes down.
To me, the right way to shut down a host is putting it into maintenance first
(VMs evacuate to other hosts) and then shutting down.
On emergency shutdown without moving the host to maintenance first, I think
libvirt communicates to the guests that the host is going down and tries to
cleanly shut down VMs while the host is going down.
Arik, please confirm :-)
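A minimal sketch of that maintenance-first approach from a UPS shutdown hook,
using the REST API host "deactivate" action (engine FQDN, credentials and
host id are placeholders):

curl -s -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -d '<action/>' \
  https://engine.example.com/ovirt-engine/api/hosts/HOST-UUID/deactivate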



>
> Nir
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXVXSLXQYZX6CQPJNXKWLOMY3LQU7XJ5/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZQFLN7YMFKZIMC6COWSG6COKHKTESOIY/


[ovirt-users] Re: what happens to vms when a host shutdowns?

2021-07-06 Thread Nir Soffer
On Tue, Jul 6, 2021 at 1:11 PM Nathanaël Blanchet  wrote:
> We are installing UPS powerchute client on hypervisors.
>
> What is the default vms behaviour of running vms when an hypervisor is
> ordered to shutdown: do the vms live migrate or do they shutdown
> properly (even the restart on an other host because of HA) ?

In general VMs are not restarted after an unexpected shutdown, but HA VMs
are restarted after failures.

If the HA VM has a lease, it can restart safely on another host regardless of
the original host status. If the HA VM does not have a lease, the system must
wait until the original host is up again to check if the VM is still
running on this
host.
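A minimal sketch of giving an HA VM a storage lease through the REST API, so
it can be restarted safely without waiting for the original host (IDs and
credentials are placeholders):

curl -s -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' -X PUT \
  -d '<vm><lease><storage_domain id="SD-UUID"/></lease></vm>' \
  https://engine.example.com/ovirt-engine/api/vms/VM-UUID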

Arik can add more details on this.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXVXSLXQYZX6CQPJNXKWLOMY3LQU7XJ5/


[ovirt-users] what happens to vms when a host shutdowns?

2021-07-06 Thread Nathanaël Blanchet

Hi,

We are installing UPS powerchute client on hypervisors.

What is the default behaviour of running VMs when a hypervisor is ordered to
shut down: do the VMs live-migrate, or do they shut down properly (and even
restart on another host because of HA)?


--
Nathanaël Blanchet

Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HMZZOD6W654VIGBEDYKJ5QQQEM5MWGL4/


[ovirt-users] oVirt 4.4.7 is now generally available

2021-07-06 Thread Sandro Bonazzola
oVirt 4.4.7 is now generally available

The oVirt project is excited to announce the general availability of oVirt
4.4.7, as of July 6th, 2021.

This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade

Please note that oVirt 4.4 only supports clusters and data centers with
compatibility version 4.2 and above. If clusters or data centers are
running with an older compatibility version, you need to upgrade them to at
least 4.2 (4.3 is recommended).

Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.

For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (See the users’ mailing list
thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXEFJNPHOXUH4HOOWRIRSB4/
)

Documentation

   - If you want to try oVirt as quickly as possible, follow the instructions
     on the Download page.
   - For complete installation, administration, and usage instructions, see
     the oVirt Documentation.
   - For upgrading from a previous version, see the oVirt Upgrade Guide.
   - For a general overview of oVirt, see About oVirt.

What’s new in oVirt 4.4.7 Release?

This update is the seventh in a series of stabilization updates to the 4.4
series.

This release is available now on x86_64 architecture for:

   - Red Hat Enterprise Linux 8.4
   - CentOS Linux (or similar) 8.4
   - CentOS Stream 8


This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

   - Red Hat Enterprise Linux 8.4
   - CentOS Linux (or similar) 8.4
   - oVirt Node NG (based on CentOS Stream 8)
   - CentOS Stream



oVirt Node and Appliance have been updated, including:

   - oVirt 4.4.7: https://www.ovirt.org/release/4.4.7/
   - CentOS Stream 8 latest updates
   - Ansible 2.9.23:
     https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v2.9.rst#v2-9-23
   - Advanced Virtualization 8.4.0.1
   - Gluster 8.5: https://docs.gluster.org/en/latest/release-notes/8.5/
   - Wildfly 23.0.2:
     https://www.wildfly.org/news/2021/04/29/WildFly2302-Released/



See the release notes [1] for installation instructions and a list of new
features and bugs fixed.

Notes:

   - oVirt Appliance is already available for CentOS Stream 8
   - oVirt Node NG is already available for CentOS Stream 8


Additional resources:

   - Read more about the oVirt 4.4.7 release highlights:
     https://www.ovirt.org/release/4.4.7/
   - Get more oVirt project updates on Twitter: https://twitter.com/ovirt
   - Check out the latest project news on the oVirt blog:
     https://blogs.ovirt.org/


[1] https://www.ovirt.org/release/4.4.7/
[2] https://resources.ovirt.org/pub/ovirt-4.4/iso/

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5RI7BRS2V2VBZ7N34FRR6NPL2B22FKGI/


[ovirt-users] Re: Any way to terminate stuck export task

2021-07-06 Thread Gianluca Cecchi
On Mon, Jul 5, 2021 at 5:06 PM Nir Soffer  wrote:


>
> qemu-img is busy in posix_fallocate(), writing one byte to every 4k block.
>
> If you add -tt -T (as I suggested), we can see how much time each write
> takes,
> which may explain why this takes so much time.
>
> strace -f -p 14342 -tt -T
>
>
It seems I missed part of your suggestion... I didn't get the "-tt -T" (or
I didn't see it...)

With it I get this during the export (in networking of host console 4
mbit/s):

# strace -f -p 25243 -tt -T
strace: Process 25243 attached with 2 threads
[pid 25243] 09:17:32.503907 ppoll([{fd=9, events=POLLIN|POLLERR|POLLHUP}],
1, NULL, NULL, 8 <unfinished ...>
[pid 25244] 09:17:32.694207 pwrite64(12, "\0", 1, 3773509631) = 1 <0.000059>
[pid 25244] 09:17:32.694412 pwrite64(12, "\0", 1, 3773513727) = 1 <0.000078>
[pid 25244] 09:17:32.694608 pwrite64(12, "\0", 1, 3773517823) = 1 <0.000056>
[pid 25244] 09:17:32.694729 pwrite64(12, "\0", 1, 3773521919) = 1 <0.000024>
[pid 25244] 09:17:32.694796 pwrite64(12, "\0", 1, 3773526015) = 1 <0.000020>
[pid 25244] 09:17:32.694855 pwrite64(12, "\0", 1, 3773530111) = 1 <0.000015>
[pid 25244] 09:17:32.694908 pwrite64(12, "\0", 1, 3773534207) = 1 <0.000014>
[pid 25244] 09:17:32.694950 pwrite64(12, "\0", 1, 3773538303) = 1 <0.000016>
[pid 25244] 09:17:32.694993 pwrite64(12, "\0", 1, 3773542399) = 1 <0.200032>
[pid 25244] 09:17:32.895140 pwrite64(12, "\0", 1, 3773546495) = 1 <0.000034>
[pid 25244] 09:17:32.895227 pwrite64(12, "\0", 1, 3773550591) = 1 <0.000029>
[pid 25244] 09:17:32.895296 pwrite64(12, "\0", 1, 3773554687) = 1 <0.000024>
[pid 25244] 09:17:32.895353 pwrite64(12, "\0", 1, 3773558783) = 1 <0.000016>
[pid 25244] 09:17:32.895400 pwrite64(12, "\0", 1, 3773562879) = 1 <0.000015>
[pid 25244] 09:17:32.895443 pwrite64(12, "\0", 1, 3773566975) = 1 <0.000015>
[pid 25244] 09:17:32.895485 pwrite64(12, "\0", 1, 3773571071) = 1 <0.000015>
[pid 25244] 09:17:32.895527 pwrite64(12, "\0", 1, 3773575167) = 1 <0.000017>
[pid 25244] 09:17:32.895570 pwrite64(12, "\0", 1, 3773579263) = 1 <0.199493>
[pid 25244] 09:17:33.095147 pwrite64(12, "\0", 1, 3773583359) = 1 <0.000031>
[pid 25244] 09:17:33.095262 pwrite64(12, "\0", 1, 3773587455) = 1 <0.000061>
[pid 25244] 09:17:33.095378 pwrite64(12, "\0", 1, 3773591551) = 1 <0.000027>
[pid 25244] 09:17:33.095445 pwrite64(12, "\0", 1, 3773595647) = 1 <0.000021>
[pid 25244] 09:17:33.095498 pwrite64(12, "\0", 1, 3773599743) = 1 <0.000016>
[pid 25244] 09:17:33.095542 pwrite64(12, "\0", 1, 3773603839) = 1 <0.000014>
. . .

BTW: it seems my NAS appliance doesn't support NFS version 4.2, because
if I force it, I then get an error in mount and in engine.log this error
for both nodes as they try to mount:

2021-07-05 17:01:56,082+02 ERROR
[org.ovirt.engine.core.bll.storage.connection.FileStorageHelper]
(EE-ManagedThreadFactory-engine-Thread-2554190) [642eb6be] The connection
with details '172.16.1.137:/nas/EXPORT-DOMAIN' failed because of error code
'477' and error message is: problem while trying to mount target


and in vdsm.log:
MountError: (32, ';mount.nfs: Protocol not supported\n')

With NFSv3 I apparently get the same command:

vdsm 19702  3036  7 17:15 ?00:00:02 /usr/bin/qemu-img convert
-p -t none -T none -f raw
/rhev/data-center/mnt/blockSD/679c0725-75fb-4af7-bff1-7c447c5d789c/images/530b3e7f-4ce4-4051-9cac-1112f5f9e8b5/d2a89b5e-7d62-4695-96d8-b762ce52b379
-O raw -o preallocation=falloc /rhev/data-center/mnt/172.16.1.137:
_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/530b3e7f-4ce4-4051-9cac-1112f5f9e8b5/d2a89b5e-7d62-4695-96d8-b762ce52b379

The file size seems bigger, but throughput is anyway very low, just as with NFS v4.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QBPOJTMVBV6DXBYK4DDG3CX3SCJM54IZ/