[ovirt-users] Re: Re-creating ISO Storage Domain after Manager Rebuild

2020-08-14 Thread Strahil Nikolov via Users
When oVirt creates a storage domain, it assigns it a unique id (stored in the engine 
DB), and a single directory named after that uuid is created on the storage.

As you lost that directory, your storage domain is gone, but since it's an ISO domain it 
shouldn't be critical.

I see 2 approaches to fixing the broken storage domain:
- Log in to the engine, switch to the postgresql user and search the DB for the domain's 
uuid. Then recreate the share, create the uuid-named dir inside it, and oVirt should be 
happy. It will still complain about the ISOs that are missing (each of which also has its 
own uuid). See the sketch right after this list.
- Another approach is to try to remove the ISO domain. Have you tried to put the domain 
into maintenance first and then remove it?
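For the first approach, roughly something like this (a sketch from memory, untested here - 
the table name and the export options may differ on your oVirt version, so double-check 
before running anything):

# On the engine host: look up the ISO domain's uuid in the engine DB
su - postgres -c "psql -d engine -c 'SELECT id, storage_name FROM storage_domain_static;'"

# Recreate the export and the uuid-named directory vdsm expects
# (path, ownership and export options below are only an example)
mkdir -p /gluster/<ISO_DOMAIN_UUID>/images/11111111-1111-1111-1111-111111111111
chown -R 36:36 /gluster        # vdsm:kvm
echo "/gluster *(rw,anonuid=36,anongid=36,all_squash)" >> /etc/exports
exportfs -r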

Check your VMs to see if any of them has an ISO attached.

Also, you can upload an ISO to a data domain and use that to install VMs.

Best  Regards,
Strahil Nikolov

On 13 August 2020 at 22:21:10 GMT+03:00, "bob.franzke--- via Users" 
 wrote:
>Hi All,
>
>Late last year I had a disk issue with my Ovirt Manager Server that
>required that I rebuild it and restore from backups. While this mostly
>worked, one thing that did not get put back correctly was the physical
>storage location of an ISO domain on the manager server. I believe this
>was previously set up as an NFS share hosted on the Manager Server
>itself. The physical disk path and filesystem were not re-created on
>the manager server's local disk. So after the restore, the ISO domain
>shows up in the Ovirt Admin portal but shows as down and inactive. If I
>go into the domain, the 'Manage Domain' and 'Remove' buttons are not
>available. The only option I have for this is 'Destroy'. Right now I
>have no ability to do things like boot a VM from a CD, and I assume
>it's because the ISO domain that the images for this would be stored on
>is not available. How can I recreate this ISO domain so I can upload
>ISO images that VMs can boot from? The original path on the manager
>server was /gluster:
>
>sdc                            8:32   0 278.9G  0 disk
>├─sdc1                         8:33   0     1G  0 part /boot
>└─sdc2                         8:34   0 277.9G  0 part
>  ├─centos_mydesktop-root    253:0    0    50G  0 lvm  /
>  ├─centos_mydesktop-swap    253:1    0  23.6G  0 lvm  [SWAP]
>  ├─centos_mydesktop-home    253:2    0    25G  0 lvm  /home
>  └─centos_mydesktop-gluster 253:3    0 179.3G  0 lvm  /gluster
>
>The current layout is now:
>
>└─sda3                         8:3    0 277.7G  0 part
>  ├─centos-root              253:0    0    50G  0 lvm  /
>  ├─centos-swap              253:1    0     4G  0 lvm  [SWAP]
>  └─centos-home              253:2    0 223.7G  0 lvm  /home
>
>Can I just create a new filesystem here, destroy the old ISO domain, 
>and add a new one using the new path? I am a total Ovirt Noob so would
>appreciate any help here. Thanks.
>
>Bob
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEQMSOODW3SZ6NLS4PTRXYC7U2CY6ECN/


[ovirt-users] Migration not working

2020-08-14 Thread Juan Pablo Lorier

Hi,

I'm having issues with migration to some hosts. I have a 4-node cluster 
and I tried updating to see if that fixes it, but migration is still failing. The 
reason is not clear in the engine log.
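
For completeness, a rough sketch of the host-side checks I plan to go through next 
(standard log locations; the VM id is taken from the engine log excerpt below):

# On the source and destination hosts
VM_ID=aac74685-c969-4761-923f-043400176edf
grep -i "$VM_ID" /var/log/vdsm/vdsm.log | grep -iE 'migrat|error' | tail -n 50
tail -n 100 /var/log/libvirt/qemu/medialist2-videoteca.log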



2020-08-14 09:45:32,322-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [364e] EVENT_ID: VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: medialist2-videoteca, Source: virt2.tnu.com.uy, Destination: virt3.tnu.com.uy, Reason: ).
2020-08-14 09:45:32,336-03 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-429) [535a8bdb] Lock Acquired to object 'EngineLock:{exclusiveLocks='[aac74685-c969-4761-923f-043400176edf=VM]', sharedLocks=''}'
2020-08-14 09:45:32,355-03 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-429) [535a8bdb] Running command: MigrateVmToServerCommand internal: true. Entities affected :  ID: aac74685-c969-4761-923f-043400176edf Type: VMAction group MIGRATE_VM with role type USER
2020-08-14 09:45:32,383-03 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-429) [535a8bdb] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='634f3f64-8945-470c-b31c-b8d4c73109e6', vmId='aac74685-c969-4761-923f-043400176edf', srcHost='virt2.tnu.com.uy', dstVdsId='f9e441a0-a4e6-4548-8c6f-83a958120f02', dstHost='virt4.tnu.com.uy:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='125', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.100.45'}), log id: 1db80717
2020-08-14 09:45:32,385-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-429) [535a8bdb] START, MigrateBrokerVDSCommand(HostName = virt2.tnu.com.uy, MigrateVDSCommandParameters:{hostId='634f3f64-8945-470c-b31c-b8d4c73109e6', vmId='aac74685-c969-4761-923f-043400176edf', srcHost='virt2.tnu.com.uy', dstVdsId='f9e441a0-a4e6-4548-8c6f-83a958120f02', dstHost='virt4.tnu.com.uy:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='125', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='172.16.100.45'}), log id: 4085b503
2020-08-14 09:45:32,433-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-429) [535a8bdb] FINISH, MigrateBrokerVDSCommand, return: , log id: 4085b503
2020-08-14 09:45:32,437-03 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-429) [535a8bdb] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 1db80717
2020-08-14 09:45:32,446-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [535a8bdb] EVENT_ID: VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: reverse_proxy, Source: virt2.tnu.com.uy, Destination: virt4.tnu.com.uy, Reason: ).
2020-08-14 09:45:32,462-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-429) [535a8bdb] EVENT_ID: USER_VDS_MAINTENANCE_WITHOUT_REASON(620), Host virt2.tnu.com.uy was switched to Maintenance mode by jplor...@tnu.com.uy.
2020-08-14 09:45:32,681-03 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-14) [7667d149-886e-406b-9cd6-9b7472fa9944] Command 'MaintenanceNumberOfVdss' id: 'db97a747-115b-4f01-aa3d-6277080e6a44' child commands '[]' executions were completed, status 'SUCCEEDED'
2020-08-14 09:45:33,024-03 INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [] Received first domain report for host virt2.tnu.com.uy
2020-08-14 09:45:33,686-03 INFO [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-88) [7667d149-886e-406b-9cd6-9b7472fa9944] Ending command 'org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand' successfully.
2020-08-14 09:45:35,137-03 INFO 

[ovirt-users] Re: Re-creating ISO Storage Domain after Manager Rebuild

2020-08-14 Thread Bob Franzke via Users
OK, thanks for the reply. As you can perhaps tell, I am a complete noob with 
Ovirt. The fact it's working at all right now is a complete miracle.

>>> I see 2 approaches on fixing the broken storage domain:
>>> - log to engine, switch to postgresql and start searching in the DB for the 
>>> uuid.

I am not sure what is meant here by 'switch to postgresql'. Can you clarify?

>>> Then create the share, create the dir inside and then oVirt should be happy.
>>> Yet it will start complaining for the ISOs that are also missing (and also 
>>> unique uuids)
>>> - Another approach is to try to remove the ISO domain. Have you tried to 
>>> set the domain in maintenance first and then to remove it ?

I am not sure how this is done. I don’t seem to have any options in the Ovirt 
admin portal to do anything other than destroy the domain. I don’t see a 
maintenance mode option in the Admin UI.
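
The closest thing I could find is that the REST API apparently has a deactivate action for 
attached storage domains, roughly like this (ids and the password are placeholders, and I 
have not tried it):

ENGINE=https://engine.example.com/ovirt-engine/api
AUTH='admin@internal:PASSWORD'
CA=/etc/pki/ovirt-engine/ca.pem

# find the data-center and storage-domain ids
curl --cacert "$CA" -u "$AUTH" "$ENGINE/datacenters"
curl --cacert "$CA" -u "$AUTH" "$ENGINE/storagedomains"

# put the attached ISO domain into maintenance, then detach it
curl --cacert "$CA" -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
  -d '<action/>' "$ENGINE/datacenters/DC_ID/storagedomains/SD_ID/deactivate"
curl --cacert "$CA" -u "$AUTH" -X DELETE "$ENGINE/datacenters/DC_ID/storagedomains/SD_ID"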

>>> Check your VMs , if any has an ISO attached to it.

We have templates that were created by someone else. All the VMs seem to be 
built off of those templates, which I assume were built initially by booting 
off DVD ISOs and installing the OSes for the template VMs. Maybe I am missing 
how VMs get OSes on them in Ovirt. I think the only thing the ISO domain did 
was allow us to use the VirtIO drivers for booting off of ISO images; I am not 
sure if that's accurate or makes sense here. My understanding was that when the 
ISO domain is created, those VirtIO drivers for CD booting are created as part 
of the ISO domain creation. This would then give you the option of attaching an 
uploaded ISO as a CD-ROM which you can use to boot a VM from. I have a CentOS 
VM with a corrupted XFS filesystem that I would like to boot from a rescue CD 
to repair the FS. The only way I know this can be done is by having the rescue 
disk ISO uploaded to an ISO domain. Perhaps there is a different way to do 
that. In any case, it would be good to get the ISO domain working. Right now 
when I try to attach a CD to a VM for booting, there are no options to choose, 
and I am assuming it's because the VirtIO drivers needed to do such a thing are 
missing because the ISO domain is broken. Does that sound right?

>>> Also, you can upload an ISO to a data domain and use that to install VMs.

Can this be used in a situation where I need to boot from a CD ISO, like 
booting from a CentOS rescue CD?
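
For reference, once an ISO is on a data domain it looks like it could be attached through 
the portal's 'Change CD' dialog, or possibly through the REST API along these lines (ids 
are placeholders; untested):

ENGINE=https://engine.example.com/ovirt-engine/api
AUTH='admin@internal:PASSWORD'
CA=/etc/pki/ovirt-engine/ca.pem

curl --cacert "$CA" -u "$AUTH" "$ENGINE/vms/VM_ID/cdroms"      # find CDROM_ID
curl --cacert "$CA" -u "$AUTH" -X PUT -H 'Content-Type: application/xml' \
  -d '<cdrom><file id="ISO_DISK_ID"/></cdrom>' \
  "$ENGINE/vms/VM_ID/cdroms/CDROM_ID?current=true"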

Thanks very much for the help. This was a system I inherited from someone else 
and I have really no idea what I am doing with it (I am not a virtualization 
expert and have a Network Engineering background). 

-Original Message-
From: hunter86...@yahoo.com (Strahil Nikolov)  
Sent: Friday, August 14, 2020 4:25 AM
To: bob.fran...@mdaemon.com; users@ovirt.org
Subject: Re: [ovirt-users] Re-creating ISO Storage Domain after Manager Rebuild

When oVirt creates a storage domain, it assigns it a unique id (stored in the engine 
DB), and a single directory named after that uuid is created on the storage.

As you lost that directory, your storage domain is gone, but since it's an ISO domain it 
shouldn't be critical.

I see 2 approaches to fixing the broken storage domain:
- Log in to the engine, switch to the postgresql user and search the DB for the domain's 
uuid. Then recreate the share, create the uuid-named dir inside it, and oVirt should be 
happy. It will still complain about the ISOs that are missing (each of which also has its 
own uuid).
- Another approach is to try to remove the ISO domain. Have you tried to put the domain 
into maintenance first and then remove it?

Check your VMs to see if any of them has an ISO attached.

Also, you can upload an ISO to a data domain and use that to install VMs.

Best  Regards,
Strahil Nikolov

On 13 August 2020 at 22:21:10 GMT+03:00, "bob.franzke--- via Users" 
 wrote:
>Hi All,
>
>Late last year I had a disk issue with my Ovirt Manager Server that
>required that I rebuild it and restore from backups. While this mostly
>worked, one thing that did not get put back correctly was the physical
>storage location of an ISO domain on the manager server. I believe this
>was previously set up as an NFS share hosted on the Manager Server
>itself. The physical disk path and filesystem were not re-created on
>the manager server's local disk. So after the restore, the ISO domain
>shows up in the Ovirt Admin portal but shows as down and inactive. If I
>go into the domain, the 'Manage Domain' and 'Remove' buttons are not
>available. The only option I have for this is 'Destroy'. Right now I
>have no ability to do things like boot a VM from a CD, and I assume
>it's because the ISO domain that the images for this would be stored on
>is not available. How can I recreate this ISO domain so I can upload
>ISO images that VMs can boot from? The original path on the manager
>server was /gluster:
>
>sdc                            8:32   0 278.9G  0 disk
>├─sdc1                         8:33