[ovirt-users] Re: CPU support Xeon E5345

2023-08-10 Thread Mikhail Po
Thank you for such a detailed answer. Reading this forum, I also found suggestions 
about changing the Ansible code, but without clear instructions or any reported 
results. The option of installing oVirt on a supported processor and then swapping 
in the unsupported one is interesting, but time-consuming.
If you have any experience or knowledge of what needs to be corrected in the code, 
and where, I would be very grateful to you :)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EPGDLMQSX7K6PKQPAX4TAVWW4N3CXYHG/


[ovirt-users] Re: oVirt 4.5.5 snapshot - Migration failed due to an Error: Fatal error during migration

2023-08-10 Thread Jorge Visentini
This is the engine log (systemctl restart ovirt-engine)

2023-08-10 19:26:09,996-03 INFO  [org.jboss.as.server.deployment] (MSC
service thread 1-7) WFLYSRV0207: Starting subdeployment (runtime-name:
"bll.jar")
2023-08-10 19:26:09,996-03 INFO  [org.jboss.as.server.deployment] (MSC
service thread 1-8) WFLYSRV0207: Starting subdeployment (runtime-name:
"root.war")
2023-08-10 19:26:09,996-03 INFO  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0207: Starting subdeployment (runtime-name:
"webadmin.war")
2023-08-10 19:26:09,996-03 INFO  [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0207: Starting subdeployment (runtime-name:
"services.war")
2023-08-10 19:26:09,999-03 INFO  [org.jboss.as.server.deployment] (MSC
service thread 1-2) WFLYSRV0207: Starting subdeployment (runtime-name:
"docs.war")
2023-08-10 19:26:10,000-03 INFO  [org.jboss.as.server.deployment] (MSC
service thread 1-5) WFLYSRV0207: Starting subdeployment (runtime-name:
"welcome.war")
2023-08-10 19:26:10,000-03 INFO  [org.jboss.as.server.deployment] (MSC
service thread 1-2) WFLYSRV0207: Starting subdeployment (runtime-name:
"enginesso.war")
2023-08-10 19:26:10,013-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/utils.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,014-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/uutils.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,014-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/sshd-core.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,014-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/sshd-common.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/jcl-over-slf4j.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/commons-beanutils.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/commons-logging.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/commons-compress.jar
in /var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/commons-lang.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/commons-codec.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/glance-client.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/glance-model.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/cinder-client.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,015-03 WARN  [org.jboss.as.server.deployment] (MSC
service thread 1-3) WFLYSRV0059: Class Path entry lib/cinder-model.jar in
/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar  does
not point to a valid jar for a Class-Path reference.
2023-08-10 19:26:10,016-03 WARN  
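With this many near-identical WFLYSRV0059 lines, it can help to collapse them into a 
per-jar count before hunting for the actual failure. A small illustrative sketch, not 
an oVirt tool; the sample lines are two of the warnings above with their wrapping 
rejoined:

```python
import re
from collections import Counter

def summarize_classpath_warnings(log_text):
    """Count WFLYSRV0059 Class-Path warnings per offending jar entry."""
    # Each warning names the entry right after "Class Path entry ".
    return Counter(re.findall(r"WFLYSRV0059: Class Path entry (\S+)", log_text))

# Two warnings pasted from the engine log above (wrapped lines rejoined).
sample = (
    "2023-08-10 19:26:10,013-03 WARN  [org.jboss.as.server.deployment] "
    "(MSC service thread 1-3) WFLYSRV0059: Class Path entry lib/utils.jar in "
    "/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar "
    "does not point to a valid jar for a Class-Path reference.\n"
    "2023-08-10 19:26:10,014-03 WARN  [org.jboss.as.server.deployment] "
    "(MSC service thread 1-3) WFLYSRV0059: Class Path entry lib/sshd-core.jar in "
    "/var/lib/ovirt-engine/jboss_runtime/deployments/engine.ear/bll.jar "
    "does not point to a valid jar for a Class-Path reference.\n"
)
print(summarize_classpath_warnings(sample))
```

A summary like this makes it easier to see that the warnings are all the same shape 
and to focus on any ERROR lines instead.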

[ovirt-users] oVirt 4.5.5 snapshot - Migration failed due to an Error: Fatal error during migration

2023-08-10 Thread Jorge Visentini
Any tips about this error?

2023-08-10 18:24:57,544-03 INFO
 [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] Lock Acquired to object
'EngineLock:{exclusiveLocks='[29032e83-cfaf-4d30-bcc2-df72c5358552=VM]',
sharedLocks=''}'
2023-08-10 18:24:57,578-03 INFO
 [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] Running command:
MigrateVmToServerCommand internal: false. Entities affected :  ID:
29032e83-cfaf-4d30-bcc2-df72c5358552 Type: VMAction group MIGRATE_VM with
role type USER
2023-08-10 18:24:57,628-03 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] START, MigrateVDSCommand(
MigrateVDSCommandParameters:{hostId='282b69aa-8b74-4312-8cc0-9c20e01982b7',
vmId='29032e83-cfaf-4d30-bcc2-df72c5358552', srcHost='ksmmi1r01ovirt18',
dstVdsId='73c38b36-36da-4ffa-b17a-492fd7b093ae',
dstHost='ksmmi1r01ovirt19:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='false', consoleAddress='null',
maxBandwidth='3125', parallel='null', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]', dstQemu='10.250.156.19', cpusets='null',
numaNodesets='null'}), log id: 5bbc21d6
2023-08-10 18:24:57,628-03 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] START,
MigrateBrokerVDSCommand(HostName = ksmmi1r01ovirt18,
MigrateVDSCommandParameters:{hostId='282b69aa-8b74-4312-8cc0-9c20e01982b7',
vmId='29032e83-cfaf-4d30-bcc2-df72c5358552', srcHost='ksmmi1r01ovirt18',
dstVdsId='73c38b36-36da-4ffa-b17a-492fd7b093ae',
dstHost='ksmmi1r01ovirt19:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', migrateEncrypted='false', consoleAddress='null',
maxBandwidth='3125', parallel='null', enableGuestEvents='true',
maxIncomingMigrations='2', maxOutgoingMigrations='2',
convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
action={name=setDowntime, params=[200]}}, {limit=3,
action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]', dstQemu='10.250.156.19', cpusets='null',
numaNodesets='null'}), log id: 14d92c9
2023-08-10 18:24:57,631-03 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] FINISH,
MigrateBrokerVDSCommand, return: , log id: 14d92c9
2023-08-10 18:24:57,634-03 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-4)
[633be3a0-3afd-490c-b412-805d2b14e1c2] FINISH, MigrateVDSCommand, return:
MigratingFrom, log id: 5bbc21d6
2023-08-10 18:24:57,639-03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] EVENT_ID:
VM_MIGRATION_START(62), Migration started (VM: ROUTER, Source:
ksmmi1r01ovirt18, Destination: ksmmi1r01ovirt19, User: admin@ovirt
@internalkeycloak-authz).
2023-08-10 18:24:57,641-03 INFO
 [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-13) [] VM
'29032e83-cfaf-4d30-bcc2-df72c5358552'(ROUTER) moved from 'MigratingFrom'
--> 'Up'
2023-08-10 18:24:57,641-03 INFO
 [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-13) [] Adding VM
'29032e83-cfaf-4d30-bcc2-df72c5358552'(ROUTER) to re-run list
2023-08-10 18:24:57,643-03 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(ForkJoinPool-1-worker-13) [] Rerun VM
'29032e83-cfaf-4d30-bcc2-df72c5358552'. Called from VDS 'ksmmi1r01ovirt18'
2023-08-10 18:24:57,679-03 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2194) [] START,
MigrateStatusVDSCommand(HostName = ksmmi1r01ovirt18,
MigrateStatusVDSCommandParameters:{hostId='282b69aa-8b74-4312-8cc0-9c20e01982b7',
vmId='29032e83-cfaf-4d30-bcc2-df72c5358552'}), log id: 445b81e0
2023-08-10 18:24:57,681-03 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-2194) [] FINISH,
MigrateStatusVDSCommand, return:
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusReturn@12cb7b1b, log
id: 445b81e0
2023-08-10 18:24:57,695-03 ERROR
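One way to narrow a log like this down: the engine tags every line of a single flow 
with the same correlation ID in square brackets (here 
633be3a0-3afd-490c-b412-805d2b14e1c2), so a trivial filter isolates just the failed 
migration attempt. A rough sketch over plain engine.log text; the sample lines are 
abridged from the log above:

```python
def lines_for_correlation(log_text, corr_id):
    """Return only the engine.log lines tagged with one correlation ID.

    The engine writes the correlation ID in square brackets, so every
    line belonging to one flow (here, one migration attempt) can be
    pulled out of an otherwise noisy log.
    """
    needle = f"[{corr_id}]"
    return [line for line in log_text.splitlines() if needle in line]

# Abridged lines from the engine log above; the second carries an
# empty correlation tag and should be filtered out.
sample = (
    "2023-08-10 18:24:57,544-03 INFO [...MigrateVmToServerCommand] "
    "(default task-4) [633be3a0-3afd-490c-b412-805d2b14e1c2] Lock Acquired\n"
    "2023-08-10 18:24:57,641-03 INFO [...VmAnalyzer] "
    "(ForkJoinPool-1-worker-13) [] Adding VM to re-run list\n"
)
print(lines_for_correlation(sample, "633be3a0-3afd-490c-b412-805d2b14e1c2"))
```

The same filter applied to vdsm.log on the source host (which uses similar bracketed 
IDs) is usually where the actual "Fatal error during migration" detail shows up.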

[ovirt-users] Re: CPU support Xeon E5345

2023-08-10 Thread Thomas Hoberg
I believe oVirt draws the line at Nehalem, which contained important 
improvements to VM performance like extended page tables. Your Core 2 based 
Xeon is below that line and you'd have to change the code to make it work.

Ultimately oVirt is just using KVM, so if KVM works, oVirt can be made to work, 
too, and KVM still supports much older CPUs.

I've faced similar issues when launching oVirt on Atoms, which are also 
considered below that line, even though they in fact support all the Nehalem 
features. The problem was that oVirt set the baseline CPU above the line when it 
created the self-hosted virtualized management engine, which then failed to start 
because the host CPU was below the line. By that time the initial setup VM had 
already done its work, so it's a bit of a nasty surprise and difficult to detect...

I got around the issue by using a more modern CPU for the initial setup of my 
3-node HCI clusters and then downgrading the CPU baseline afterwards. But in 
theory you could just find the code that sets the CPU type and change it; there 
is a good chance it's hidden away in some Ansible or Python script.

Of course switching systems mid-flight comes with all kinds of other issues, 
but when you're bent on bending the basic requirements the developers have used 
for their code, you need to do the extra work.
___
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FYISK7RMLSCTKJCPNM7IZRZ34D2YU5NA/


[ovirt-users] CPU support Xeon E5345

2023-08-10 Thread Mikhail Po
Is it possible to install oVirt 4.3/4.4 on a ProLiant BL460c G1 with an Intel Xeon 
E5345 processor @ 2.33GHz?
The setup fails with: [ERROR] fatal: [localhost]: FAILED! => {"Changed": false, 
"message": "The host was inoperable, deployment errors: code 156: Host 
host1.test.com disabled because the host CPU type in this case is not supported by 
the cluster compatibility version or is not supported at all, code 9000: Failed to 
check the power management configuration for the host host1.test.com., correct 
accordingly and re-deploy."}
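Error code 156 points at the CPU type check, but on hardware of this generation it 
is also worth confirming that VT-x is exposed at all, since it is often disabled in 
the BIOS even though the E5345 itself supports it. A small sketch that looks for 
the vmx/svm flags in /proc/cpuinfo:

```python
def virtualization_flags(cpuinfo_text):
    """Return the hardware-virtualization flags (vmx for Intel, svm for
    AMD) found in a /proc/cpuinfo dump, or an empty list if none."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return sorted(flags & {"vmx", "svm"})

# Abbreviated flags line; a host with VT-x enabled in the BIOS shows vmx.
sample = "flags\t\t: fpu vme de msr pae vmx ssse3 sse2 cx16 lm"
print(virtualization_flags(sample))
```

An empty result on the real /proc/cpuinfo would mean KVM cannot run at all, which 
is a separate problem from the cluster CPU type check.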
___
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DFXXYP6XFMSV4LVYAI2I6WPN5A5JYMJE/