Re: [PVE-User] vGPU scheduling

2023-05-31 Thread Eneko Lacunza via pve-user
Hi, On 25/5/23 at 10:03, Eneko Lacunza wrote: As Ubuntu 22.04 is in it and the Proxmox kernel is derived from it, the technical effort may not be so large. Yes, their current Linux KVM package (15.2) should work with our 5.15 kernel, it's what I use here

Re: [PVE-User] vGPU scheduling

2023-05-25 Thread Thomas Lamprecht
On 25/05/2023 at 09:43, Dominik Csapak wrote: >> Do you think it'll be merged for Proxmox 8? > I don't know, but this also depends on the capacity of my colleagues to review. Making it easy to digest and adding (good) tests will surely help to accelerate this ;-P But, you're

Re: [PVE-User] vGPU scheduling

2023-05-25 Thread Eneko Lacunza via pve-user
Hi, On 25/5/23 at 9:53, Dominik Csapak wrote: On 25/5/23 at 9:24, Dominik Csapak wrote: 2.12.0 (qemu-kvm-2.12.0-64.el8.2.27782638) * Microsoft Windows Server with Hyper-V 2019 Datacenter edition * Red Hat Enterprise Linux Kernel-based Virtual Machine

Re: [PVE-User] vGPU scheduling

2023-05-25 Thread Dominik Csapak
On 5/25/23 09:36, Eneko Lacunza via pve-user wrote: Hi Dominik, On 25/5/23 at 9:24, Dominik Csapak wrote: 2.12.0 (qemu-kvm-2.12.0-64.el8.2.27782638) * Microsoft Windows Server with Hyper-V 2019 Datacenter edition * Red Hat Enterprise Linux Kernel-based Virtual Machine (KVM)

Re: [PVE-User] vGPU scheduling

2023-05-25 Thread Dominik Csapak
On 5/25/23 09:32, DERUMIER, Alexandre wrote: Hi Dominik, any news about your patches "add cluster-wide hardware device mapping"? I'm currently working on a new version of this; the first part was my recent series for the section config/API array support. I think I can send the new version for the backend

Re: [PVE-User] vGPU scheduling

2023-05-25 Thread DERUMIER, Alexandre
Hi Dominik, any news about your patches "add cluster-wide hardware device mapping"? Do you think it'll be merged for Proxmox 8? I think it could help for this use case. On Wednesday 24 May 2023 at 15:47 +0200, Dominik Csapak wrote: > On 5/24/23 15:03, Eneko Lacunza via pve-user wrote: >

Re: [PVE-User] vGPU scheduling

2023-05-25 Thread Eneko Lacunza via pve-user
Hi Dominik, On 25/5/23 at 9:24, Dominik Csapak wrote: 2.12.0 (qemu-kvm-2.12.0-64.el8.2.27782638) * Microsoft Windows Server with Hyper-V 2019 Datacenter edition * Red Hat Enterprise Linux Kernel-based Virtual Machine (KVM) 9.0 and 9.1 * Red Hat

Re: [PVE-User] vGPU scheduling

2023-05-25 Thread Dominik Csapak
On 5/24/23 18:31, Stephan Leemburg via pve-user wrote: Hi @Proxmox, Hi, I have another question about specific NVIDIA vGPU usage and licensing. Currently, the following hypervisors are supported for running vGPU in combination with NLS/DLS NVIDIA licensing servers: * Citrix

Re: [PVE-User] vGPU scheduling

2023-05-24 Thread Stephan Leemburg via pve-user
Hi @Proxmox, I have another question about specific NVIDIA vGPU usage and licensing. Currently, the following hypervisors are supported for running vGPU in combination with NLS/DLS NVIDIA licensing servers: * Citrix Hypervisor 8.2 * Linux Kernel-based Virtual

Re: [PVE-User] vGPU scheduling

2023-05-24 Thread Eneko Lacunza via pve-user
Hi, On 24/5/23 at 15:47, Dominik Csapak wrote: We're looking to move a PoC at a customer to full-scale production. The Proxmox/Ceph cluster will be for VDI, and some VMs will use vGPU. I'd like to know if vGPU status is being exposed right now (as of 7.4) for each

Re: [PVE-User] vGPU scheduling

2023-05-24 Thread Dominik Csapak
On 5/24/23 15:03, Eneko Lacunza via pve-user wrote: Hi, Hi, We're looking to move a PoC at a customer to full-scale production. The Proxmox/Ceph cluster will be for VDI, and some VMs will use vGPU. I'd like to know if vGPU status is being exposed right now (as of 7.4) for each node through

[PVE-User] vGPU scheduling

2023-05-24 Thread Eneko Lacunza via pve-user
Hi, We're looking to move a PoC at a customer to full-scale production. The Proxmox/Ceph cluster will be for VDI, and some VMs will use vGPU. I'd like to know if vGPU status is being exposed right now (as of 7.4) for each node through the API, as it is done for RAM/CPU, and if
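The placement question raised in this thread can be sketched as a small scheduling helper. This is a hypothetical illustration, not part of the Proxmox VE API or the patches discussed above: it assumes you have already collected, per node, the number of free instances of the desired mediated-device (vGPU) profile, and it simply picks the node with the most free slots, returning None when no node has capacity. The function and node names are made up for the example.

```python
# Hypothetical sketch: pick a placement node for a vGPU-backed VM.
# Assumes free-slot counts per node were already gathered elsewhere
# (e.g. from each node's mediated-device inventory); data here is made up.

def pick_node(free_vgpu_slots):
    """Return the node with the most free vGPU instances, or None."""
    candidates = {node: free for node, free in free_vgpu_slots.items() if free > 0}
    if not candidates:
        return None  # no node can host another vGPU VM
    # Iterate nodes in sorted order so ties break deterministically
    # (alphabetically first node wins); max() keeps the highest count.
    return max(sorted(candidates), key=lambda node: candidates[node])

# Example inventory: free instances of one vGPU profile per node.
inventory = {"pve1": 0, "pve2": 3, "pve3": 1}
print(pick_node(inventory))      # pve2
print(pick_node({"pve1": 0}))    # None
```

A real scheduler would also need to account for the vGPU profile requested by the VM and for heterogeneous GPUs across nodes, which is exactly the kind of per-node information the poster is asking the API to expose.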