Carsten Aulbert wrote:
Hi,

I'll start with a one-off question here, so please cc me on the reply. We are running a largish cluster and are currently buying GPGPU systems (Tesla now, Fermi-based soon). We will have at least 2, possibly 4, of these cards per box, and the problem is that some codes need different CUDA kernel drivers to run. Since these boxes have 4 CPU cores, 12 GB of memory and CPU VT support, we thought this might be solvable by creating (para-)virtualized guests on the boxes and passing one GPGPU device into each guest. Inside a guest we could then run whatever kernel/driver combination is necessary.

But since my current virtualization experience stretches only to OpenVZ and VirtualBox (plus some tinkering with Xen a couple of years back), I don't know if KVM is the right approach here. We need something we can set up automatically via the CLI, i.e. starting and stopping the guests needs to be fully automatic. We don't need a graphical environment within the guests; plain text is good enough.
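[For what it's worth, the fully automatic, text-only part is well covered today: guests can be defined, started and stopped non-interactively with libvirt's virsh. A minimal sketch — the guest name "gpgpu-guest" and the XML path are placeholders:

```shell
# Register a guest from its libvirt XML definition (one-time step).
virsh define /etc/libvirt/qemu/gpgpu-guest.xml

# Start and stop it from scripts; no graphical environment involved.
virsh start gpgpu-guest
virsh console gpgpu-guest    # attach to the plain-text serial console
virsh shutdown gpgpu-guest   # clean ACPI shutdown

# List running guests, e.g. from a monitoring cron job.
virsh list
```

The same XML definition is where a passed-through PCI device would be declared, so the whole per-guest setup stays scriptable.]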

What do you think, is looking at KVM the right choice for this? Can we pass a device directly into a guest?
KVM already supports device assignment (e.g. a NIC) to a guest, but graphics card assignment is not supported yet.
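[For a non-graphics PCI device, assignment looks roughly like the following sketch. The BDF 0000:07:00.0 and the vendor/device IDs are placeholders (take the real values from lspci -nn), and the exact qemu option names vary with the KVM/qemu version in use:

```shell
# Hide the device from its host driver by binding it to pci-stub.
modprobe pci-stub
echo "10de 0a20" > /sys/bus/pci/drivers/pci-stub/new_id   # placeholder IDs
echo "0000:07:00.0" > /sys/bus/pci/devices/0000:07:00.0/driver/unbind
echo "0000:07:00.0" > /sys/bus/pci/drivers/pci-stub/bind

# Hand the device to the guest at start-up.
qemu-kvm -m 2048 -drive file=guest.img \
    -device pci-assign,host=07:00.0 -nographic
```

For a GPU this is not enough today, which is the gap the graphics-assignment patches are meant to close.]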

Federico, you said you were porting xen graphics assignment patches to kvm, what's your progress?

Regards,
Weidong

Cheers

Carsten


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
