Hi Dan,
I also never really understood iSCSI Multipath in oVirt, and in my opinion the
implementation is broken.

There have been a lot of messages about this issue in the past, but at the end
of the day, if you have separate networks (which is how MPIO should be set up),
it just doesn't work with the Multipath option. That's because all paths must
be reachable within the same network, which isn't the standard for MPIO and is
also not recommended due to multihoming issues.

Just create two connections and leave Multipath in oVirt disabled. That will
work as long as your initiator has multiple addresses; a sketch of how that can
be scripted is below.
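If you prefer to script it instead of clicking through the UI, something along
these lines should work with the Python SDK (ovirt-engine-sdk-python). This is
an untested sketch: the engine URL, credentials, CHAP details, target IQN and
storage domain name are placeholders for your environment, and the exact
service names may differ slightly between SDK versions.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: engine URL, credentials and CA bundle for your setup.
conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
system = conn.system_service()

# What the engine has cached for each iSCSI connection: address, port,
# target and CHAP username (the password itself is never returned).
for c in system.storage_connections_service().list():
    print(c.id, c.address, c.port, c.target, c.username)

# Register the second gateway portal as its own connection.
second = system.storage_connections_service().add(
    types.StorageConnection(
        type=types.StorageType.ISCSI,
        address='192.168.42.85',
        port=3260,
        target='iqn.2003-01.com.example:target',  # placeholder IQN
        username='chap-user',                     # placeholder CHAP credentials
        password='chap-secret',
    ),
)

# Attach it to an existing storage domain (put the domain in maintenance first).
sds_service = system.storage_domains_service()
sd = sds_service.list(search='name=vm-data')[0]   # placeholder domain name
sds_service.storage_domain_service(sd.id).storage_connections_service().add(
    types.StorageConnection(id=second.id),
)

conn.close()

Once the host logs in to both portals, multipathd on the host should group the
two paths to each LUN by WWID on its own, without an iSCSI bond defined in the
engine.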

You can take a look at the discussion and the bug opened years ago:
* https://users.ovirt.narkive.com/U8T6krlh/ovirt-iscsi-multipath-issues
* https://bugzilla.redhat.com/show_bug.cgi?id=1474904

Regards,

Sent from my iPhone

On 13 Nov 2025, at 19:10, Dan O'Brien <[email protected]> wrote:

TL;DR: I'd like to better understand iSCSI "connections" versus oVirt iSCSI
multipath bonds, and also how oVirt caches iSCSI connection information and
credentials.

I'm standing up a new cluster that will replace an aging cluster running oVirt
4.3 on CentOS 7 hosts with GlusterFS storage. I've cleared most of the
milestones I need to proceed with migrating the VMs, but I'm struggling a bit
with the interaction between oVirt's networking and the iSCSI storage for the
new cluster.

The new cluster runs Rocky 9.6 hosts, with oVirt installed from the master
snapshot on 10/31/2025. Storage for the new cluster uses the Ceph iSCSI
gateway (2 hosts, 1 iSCSI target, 1 IP from each host assigned as a portal). I
have a dedicated VLAN for iSCSI traffic (VLAN 42: 192.168.42.0/24). I removed
one of the hosts from the old cluster (a Dell PowerEdge R620) and have used it
to seed the new cluster with the self-hosted engine.

I have 2 Ceph hosts running the iSCSI gateway on VLAN 42, assigned IPs .85 and
.88. During deployment of the hosted engine, I was prompted for the storage; I
used 192.168.42.88 as the iSCSI portal, logged in to the target, and the
deployment succeeded. I had a LUN on the iSCSI target for the hosted engine
and its associated storage domain.

For the deployment of the hosted engine, I had one NIC set up and assigned a
static IP on our internal network (access port, no VLAN tagging). The ovirtmgmt
logical network gets assigned to this NIC, and the hosted engine coexists
happily with the host, sharing the NIC on the same subnet. Before deployment,
the host was also set up with a bond of 2x10G Ethernet ports in a LAG and a
VLAN interface for VLAN 42.

If I look at the host Network Interfaces after deployment, I can see the 
existing bond (bond0). The VLAN interface I created before deployment 
(bond0.42) is listed under Logical Networks, but is flagged as Unmanaged.

Once in the UI, I created 2 additional Storage Domains (Data): one for ISOs and
one for VMs, using additional LUNs (backed by Ceph RBD images). The UI seems to
have re-used the iSCSI connection set up for the hosted engine. I could see the
LUNs I created and was able to assign them to the storage domains (1x512GB RBD
for ISOs and 2x512GB RBD for the VM storage domain).

All of this is great, BUT I’m only using one of the iSCSI gateways.

One option I have is to set up iSCSI multipathing in the datacenter. This
doesn't seem to be possible for two reasons: my storage network is being
accessed via the unmanaged logical network, AND I need an ADDITIONAL VLAN using
a different subnet to set up another logical network for the iSCSI multipath
bond. This is a bit of a headache, but I think it's doable after I add another
host and migrate the hosted engine, so I can remove the unmanaged VLAN 42
interface from the bond and replace it with a managed logical network tagged
with VLAN 42 and assigned an appropriate IP address.

HOWEVER: I noticed there's a CONNECTIONS button on the storage domain. This
shows a panel with an "Attached" connection to the iSCSI target and the portal
address I specified during the hosted engine setup. After putting the ISO
storage domain in maintenance, I could add another connection with the second
portal of the target (I'm not sure this actually did anything). When I tried to
do this for the VM storage domain, I got an error that a connection already
existed. Attempting discovery by entering the IP address of one of the portals
and the CHAP username/password doesn't seem to work; I keep getting
authentication failures even though I'm pretty sure I'm using the right
credentials.

So my questions:
- What’s the difference between multiple “paths” and the multipathing set up in 
oVirt?
- What information about the iSCSI connections is oVirt storing, and how does
it use it when you set up later storage domains?
- Are multiple “paths” in the storage domain equivalent to the multipath bonds? 
I’d like to use the existing portals in my iSCSI VLAN and not have to set up 
another VLAN (which is not terrible, just inconvenient).
- Is there a way of manipulating the iSCSI connection information after the
fact? There's a warning in the docs that adding paths isn't supported, but it
seems to refer only to the UI. Can it be done with the REST API or Ansible?

Sorry for the long read. I've been using oVirt since I started at my present 
job and it's been solid. I appreciate the work you're doing keeping this 
project going. I'm looking forward to having a cluster with the latest software 
and retiring GlusterFS.
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/NDNNAYM25YQLODHJRAQUYVU5EPPPAH7T/