Pierre-Luc,

Thanks for that. So, for my own clarification: you are saying that for you, on XenServer Enterprise (plus drivers and licensing), the vGPU feature "just works" out of the box using the standard CloudStack feature (the same one that supported NVIDIA GRID K1/K2 all those years ago), which we can find in the UI/API when defining compute offerings, correct?

Regards

On 2024-03-11 20:06, Pierre-Luc Dion wrote:
The way we've been delivering GPU offerings with CloudStack is by using
host tags. Each host with a specific GPU gets a host tag, for example
"a16", and the compute offering with the GPU definition uses the same
host tag, "a16".

We've been using this with XenServer Enterprise and so far we've had no
issues with GPU and vGPU support.
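
For illustration, the same setup via the API looks roughly like the
untested sketch below, using the third-party "cs" Python client
(pip install cs); the endpoint, keys, host UUID, sizing and tag value
are placeholders, so adapt them to your environment:

    from cs import CloudStack

    # Placeholder endpoint and credentials.
    api = CloudStack(endpoint="https://cloudstack.example.com/client/api",
                     key="API_KEY", secret="SECRET_KEY")

    # Tag the host that carries the GPU, e.g. an A16 host.
    api.updateHost(id="HOST_UUID", hosttags="a16")

    # Create a compute offering carrying the same host tag, so VMs
    # deployed with it can only land on the tagged GPU hosts.
    api.createServiceOffering(name="gpu-a16",
                              displaytext="GPU offering pinned to a16 hosts",
                              cpunumber=8, cpuspeed=2000, memory=16384,
                              hosttags="a16")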


Nux: vGPU and GPU are more attractive than ever with AI inferencing
workloads: GPU for AI and desktop, vGPU mostly for desktop.


On Tue, Feb 27, 2024 at 7:00 AM Nux <n...@li.nux.ro> wrote:

This sounds foreign to me; AFAIK GPU support is limited to certain (old)
NVIDIA GRID cards on XenServer Enterprise.
Modern GPUs are not supported out of the box, although of course many
here do use them by means of custom XML/Groovy scripts.

As for how you detect them, I have no idea; let's see how other users do
it, if they care to share.

On 2024-02-26 18:00, Douglas Oliveira wrote:
> Hello,
> How does the GPU discovery process work on the hypervisor with
> CloudStack? Is it something similar to what OpenNebula does (through
> lspci)?
> I currently have a service offering created via the API for an NVIDIA
> A16 GPU, which does not work: I'm told that there are no hosts
> available to serve the resource. So I'm unsure whether the problem is
> the service offering itself or the GPU not being detected on the host.
>
> Regards
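
PS: for reference, the lspci-based discovery Douglas mentions (what
OpenNebula does) boils down to something like the rough sketch below.
This is only an illustration of that approach, not how CloudStack itself
detects GPUs, and the matching strings are assumptions:

    import subprocess

    def find_nvidia_gpus():
        # "lspci -nn" lists PCI devices with vendor:device IDs included.
        out = subprocess.run(["lspci", "-nn"], capture_output=True,
                             text=True, check=True)
        # NVIDIA GPUs typically show up as "VGA compatible controller"
        # or "3D controller" entries.
        return [line for line in out.stdout.splitlines()
                if "NVIDIA" in line
                and ("VGA compatible controller" in line
                     or "3D controller" in line)]

    for gpu in find_nvidia_gpus():
        print(gpu)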
