On Thu, Nov 2, 2023 at 4:40 PM Yuriy <ykhokh...@gmail.com> wrote:

> Hi Nir!
>
> This is Yuriy.
> We agreed to continue the subject via email.
>

So the options are:

1. Using Managed Block Storage (cinderlib) with a driver that supports
NVMe/TCP.

Latest oVirt has the needed changes to configure this. Benny and I tested it
with the Lightbits[1] driver in a virtualized environment. This is basically
a POC that may or may not work for you, or may require more work that you
will have to do yourself, since not much development is happening in oVirt
now.
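For reference, a Managed Block Storage domain is created in the
Administration Portal by choosing the "Managed Block Storage" domain
function and entering the Cinder driver options as key/value pairs. A
sketch of what those options might look like for the Lightbits driver
(option names and values here are illustrative; check them against the
Lightbits cinder driver documentation before use):

```
# Driver options entered in the Managed Block Storage domain dialog (sketch)
volume_driver = cinder.volume.drivers.lightos.LightOSVolumeDriver
# Management endpoint of the Lightbits cluster (example address)
lightos_api_address = 10.0.0.10
lightos_api_port = 443
```

The hosts also need the NVMe/TCP initiator bits (nvme-cli, the nvme-tcp
kernel module) so that cinderlib can actually attach the volumes.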

2. Using the devices via multipath

Legacy storage domains are based on multipath. It may be possible to use
multipath on top of NVMe devices; in that case they look like normal LUNs,
so you can create a storage domain from such devices.

oVirt will not handle connections for you, and all the devices must be
connected to all nodes at the
same time, just like FC/iSCSI LUNs. You will likely not get the performance
benefit of NVMe/TCP.
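Since oVirt does not manage the connections here, each host has to connect
the subsystem itself, and multipathd has to be told not to skip the NVMe
devices. A minimal sketch (addresses, NQN, and settings are example values;
note that oVirt/VDSM owns /etc/multipath.conf, so local changes belong in a
drop-in file under /etc/multipath/conf.d/):

```
# On every host, connect the subsystem manually (example values):
#   modprobe nvme-tcp
#   nvme connect -t tcp -a 10.0.0.10 -s 4420 -n nqn.2023-01.example:subsys1
#
# /etc/multipath/conf.d/nvme.conf (sketch; verify against your multipathd)
blacklist_exceptions {
    devnode "^nvme[0-9]"
}
```

After `multipath -r`, the device should show up in `multipath -ll` and can
be selected when creating the storage domain, like any other LUN.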

3. Using host devices

If what you need is to use some devices (which happen to be connected via
NVMe/TCP), maybe you can attach them to a VM directly (using host devices).
This gives the best possible performance but no features (snapshots, backup,
live migration, live storage migration, etc.)
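At the libvirt level, one concrete form of such direct attachment is a raw
block disk pointing at the NVMe namespace. oVirt's Host Devices feature
covers PCI/USB/SCSI devices, so for an NVMe/TCP namespace a raw block
mapping like this sketch may be the closer fit (the device path is an
example):

```xml
<!-- Sketch: attach the NVMe namespace /dev/nvme0n1 (example path) to the
     guest as a raw virtio block device; no oVirt-level features (snapshots,
     live migration, etc.) apply to a device attached this way. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/nvme0n1'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```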

[1] https://www.lightbitslabs.com/

Nir
_______________________________________________
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/C7G4T7AZ2QWZXEU4BAA2WTG3WGQPLQBQ/
