Have you reviewed the engine-setup logs? There might be something interesting there.
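On a standard engine install they land under /var/log/ovirt-engine/setup/ on the engine machine - something like this should surface anything suspicious (paths assume a default install; the exact log file names include a timestamp):

```shell
# List the engine-setup logs, newest first (default log directory on the
# engine machine; adjust if your install differs):
ls -lt /var/log/ovirt-engine/setup/

# Skim the most recent setup log for errors/warnings:
grep -iE 'error|warn' "$(ls -t /var/log/ovirt-engine/setup/*.log | head -n 1)" | tail -n 50
```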

Pat.

On 2023-12-20 05:07, Matthew J Black wrote:
Hi Guys & Gals,

So I've been researching this issue online for a couple of days now and I can't 
seem to find a solution - so I'm hoping you kind people here can help.

We're running oVirt (on Rocky 8, at the moment) with an iSCSI back-end provided 
by Ceph (Quincy, for the record). Everything on the Ceph end looks AOK.

However, none of the oVirt Hosts (and therefore the VMs) can connect to the Ceph iSCSI 
RBD Images (oVirt Storage Domains), and only one or two of the Hosts can log into the 
Ceph iSCSI Target - the others throw a "Failed to setup iSCSI subsystem" error.

All of the existing iSCSI Storage Domains are in Maintenance mode, and when I try to do 
*anything* to them the logs spit out a "Storage domain does not exist:" message.

I also cannot create a new iSCSI Storage Domain for a new Ceph pool - again, 
oVirt simply won't/can't see it, even though it is clearly visible in the iSCSI 
section of the Ceph Dashboard (and in gwcli on the Ceph Nodes).

All of this started happening after I ran an update of oVirt - including an 
engine-setup run with a full engine vacuum. Nothing has changed on the Ceph end.

So I'm looking for help on 2 issues, which may or may not be related:

1) Is there a way to "force" oVirt Hosts to log into iSCSI targets? This would 
mean all of the oVirt Hosts would be connected to all of the Ceph iSCSI Gateways.
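For what it's worth, I can drive the logins by hand with iscsiadm on each host - a sketch of what I've been trying (the portal IP and IQN below are placeholders; the real ones come from gwcli / the Ceph Dashboard):

```shell
# On each oVirt host, discover the targets exposed by a Ceph iSCSI gateway
# (replace 192.0.2.10 with an actual gateway portal IP):
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to a discovered target (replace the IQN with the one gwcli reports;
# if CHAP is configured on the gateway, credentials need to be set on the
# node record first):
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw \
    -p 192.0.2.10:3260 --login

# Confirm which sessions are actually up:
iscsiadm -m session
```

Though my understanding is that oVirt/VDSM expects to manage these sessions itself, so I'm not sure manual logins will stick.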

2) I'm thinking that *somehow* the existing Storage Domains registered in oVirt have 
become orphaned and need to be "cleaned up" - is there a CLI way to do this? (I don't 
mind digging into SQL, as I'm an old SQL engineer/admin from way back.) Thoughts on how 
to do this - and on whether it should be done at all?
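I've been poking at the engine database read-only to see what it thinks it has - something along these lines (the engine-psql.sh helper path and the table/view/column names are from memory of older engine schemas, so please verify them on your own install before trusting, let alone modifying, anything):

```shell
# On the engine machine, query the engine DB via the helper shipped with
# the engine (path assumes a standard install - verify on yours):
/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c \
  "SELECT id, storage_name, status FROM storage_domains;"

# The iSCSI connection details the engine has recorded (column names are
# my best recollection - check with \d storage_server_connections first):
/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c \
  "SELECT id, connection, iqn, portal FROM storage_server_connections;"
```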

Or should I simply detach the Storage Domain images from the relevant VMs, 
destroy the Storage Domains, recreate them (once I can get the oVirt Hosts to 
log back into the Ceph iSCSI Gateways), and then reattach the relevant images 
to the relevant VMs? I mean, after all, the data is good and available on the 
Ceph SAN, so there is only a little risk (as far as I can see) - but there are 
a *hell* of a lot of VMs to do this to :-)

Anyway, any and all help, suggestions, gotchas, etc are welcome. :-)

Cheers

Dulux-Oz
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MG7DCZZOWX7M6UAHXZ6L3V2MYF67RIF5/