Copilot commented on code in PR #526:
URL: https://github.com/apache/cloudstack-documentation/pull/526#discussion_r2189471505


##########
source/adminguide/virtual_machines.rst:
##########
@@ -1593,39 +1593,54 @@ CloudStack meet the intensive graphical processing requirement by means of the
 high computation power of GPU/vGPU, and CloudStack users can run multimedia
 rich applications, such as Auto-CAD, that they otherwise enjoy at their desk on
 a virtualized environment.
-CloudStack leverages the XenServer support for NVIDIA GRID Kepler 1 and 2 series
-to run GPU/vGPU enabled Instances. NVIDIA GRID cards allows sharing a single GPU cards
-among multiple Instances by creating vGPUs for each Instance. With vGPU technology, the
-graphics commands from each Instance are passed directly to the underlying dedicated
-GPU, without the intervention of the hypervisor. This allows the GPU hardware
-to be time-sliced and shared across multiple Instances. XenServer hosts use the GPU
-cards in following ways:
-
-**GPU passthrough**: GPU passthrough represents a physical GPU which can be
+
+For KVM, CloudStack leverages libvirt's PCI passthrough feature to assign a
+physical GPU to a guest Instance. For vGPU profiles, depending on the vGPU type,
+CloudStack uses mediated devices or Virtual Functions(VF) to assign a virtual
+GPU to a guest Instance. It's the responsibility of the operator to ensure that
+GPU devices are in correct state and are available for use on the host. If the
+operator wants to use vGPU profiles, they need to ensure that the vGPU type is
+supported by the host and has been created on the host.
+
+For XenServer, CloudStack leverages the XenServer support for NVIDIA GRID
+Kepler 1 and 2 series to run GPU/vGPU enabled Instances.
+
+Some NVIDIA cards allows sharing a single GPU card among multiple Instances by

Review Comment:
   Grammatical error: 'cards allows' should be 'cards allow'.
   ```suggestion
   Some NVIDIA cards allow sharing a single GPU card among multiple Instances by
   ```

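For context: the KVM paragraph in this hunk says CloudStack assigns a whole physical GPU through libvirt PCI passthrough and a vGPU through a mediated device or an SR-IOV Virtual Function. Below is a minimal sketch of the libvirt `<hostdev>` fragments those two modes correspond to, assuming a placeholder PCI address and mdev UUID (neither taken from the PR nor from CloudStack's code).

```python
# Illustrative only: prints the libvirt <hostdev> fragments for the two GPU
# assignment modes described above. The PCI address and mdev UUID are
# placeholders, not values used by CloudStack.

def pci_passthrough_hostdev(domain: int, bus: int, slot: int, function: int) -> str:
    """Whole-GPU PCI passthrough: the guest is given the physical device."""
    return (
        "<hostdev mode='subsystem' type='pci' managed='yes'>\n"
        "  <source>\n"
        f"    <address domain='0x{domain:04x}' bus='0x{bus:02x}' "
        f"slot='0x{slot:02x}' function='0x{function:x}'/>\n"
        "  </source>\n"
        "</hostdev>"
    )

def mdev_vgpu_hostdev(mdev_uuid: str) -> str:
    """vGPU via a mediated device that must already exist on the host."""
    return (
        "<hostdev mode='subsystem' type='mdev' model='vfio-pci'>\n"
        "  <source>\n"
        f"    <address uuid='{mdev_uuid}'/>\n"
        "  </source>\n"
        "</hostdev>"
    )

if __name__ == "__main__":
    print(pci_passthrough_hostdev(0x0000, 0x3b, 0x00, 0x0))
    print(mdev_vgpu_hostdev("4b20d080-1b54-4048-85b3-a6a62d165c01"))
```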


##########
source/adminguide/virtual_machines.rst:
##########
@@ -1593,39 +1593,54 @@ CloudStack meet the intensive graphical processing requirement by means of the
 high computation power of GPU/vGPU, and CloudStack users can run multimedia
 rich applications, such as Auto-CAD, that they otherwise enjoy at their desk on
 a virtualized environment.
-CloudStack leverages the XenServer support for NVIDIA GRID Kepler 1 and 2 series
-to run GPU/vGPU enabled Instances. NVIDIA GRID cards allows sharing a single GPU cards
-among multiple Instances by creating vGPUs for each Instance. With vGPU technology, the
-graphics commands from each Instance are passed directly to the underlying dedicated
-GPU, without the intervention of the hypervisor. This allows the GPU hardware
-to be time-sliced and shared across multiple Instances. XenServer hosts use the GPU
-cards in following ways:
-
-**GPU passthrough**: GPU passthrough represents a physical GPU which can be
+
+For KVM, CloudStack leverages libvirt's PCI passthrough feature to assign a
+physical GPU to a guest Instance. For vGPU profiles, depending on the vGPU type,
+CloudStack uses mediated devices or Virtual Functions(VF) to assign a virtual
+GPU to a guest Instance. It's the responsibility of the operator to ensure that
+GPU devices are in correct state and are available for use on the host. If the
+operator wants to use vGPU profiles, they need to ensure that the vGPU type is
+supported by the host and has been created on the host.
+
+For XenServer, CloudStack leverages the XenServer support for NVIDIA GRID
+Kepler 1 and 2 series to run GPU/vGPU enabled Instances.
+
+Some NVIDIA cards allows sharing a single GPU card among multiple Instances by
+creating vGPUs for each Instance. With vGPU technology, the graphics commands
+from each Instance are passed directly to the underlying dedicated GPU, without
+the intervention of the hypervisor. This allows the GPU hardware to be
+time-sliced and shared across multiple Instances. The GPU cards are used in the
+following ways:
+
+**passthrough**: GPU passthrough represents a physical GPU which can be
 directly assigned to an Instance. GPU passthrough can be used on a hypervisor alongside
 GRID vGPU, with some restrictions: A GRID physical GPU can either host GRID
 vGPUs or be used as passthrough, but not both at the same time.
 
-**GRID vGPU**: GRID vGPU enables multiple Instances to share a single physical GPU.
+**vGPU**: vGPU enables multiple Instances to share a single physical GPU.
 The Instances run an NVIDIA driver stack and get direct access to the GPU. GRID
 physical GPUs are capable of supporting multiple virtual GPU devices (vGPUs)
-that can be assigned directly to guest Instances. Guest Instances use GRID virtual GPUs in
+that can be assigned directly to guest Instances. Guest Instances use vGPUs in
 the same manner as a physical GPU that has been passed through by the
 hypervisor: an NVIDIA driver loaded in the guest Instance provides direct access to
 the GPU for performance-critical fast paths, and a paravirtualized interface to
-the GRID Virtual GPU Manager, which is used for nonperformant management
-operations. NVIDIA GRID Virtual GPU Manager for XenServer runs in dom0.
+the NVIDIA vGPU Manager, which is used for nonperformant management
+operations. NVIDIA vGPU Manager for XenServer runs in dom0.
+
 CloudStack provides you with the following capabilities:
 
-- Adding XenServer hosts with GPU/vGPU capability provisioned by the administrator.
+- Adding hosts with GPU/vGPU capability provisioned by the administrator.
+  (Supports only XenServer & KVM)
 
-- Creating a Compute Offering with GPU/vGPU capability.
+- Creating a Compute Offering with GPU/vGPU capability. For KVM, it possible to

Review Comment:
   Missing 'is': change 'it possible' to 'it is possible'.
   ```suggestion
   - Creating a Compute Offering with GPU/vGPU capability. For KVM, it is possible to
   ```

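For context on the passthrough/vGPU split described in this hunk (a physical GPU can host vGPUs or be passed through, but not both at the same time): the sketch below, assuming a Linux/KVM host with the standard sysfs layout, shows how an operator might see which mode each display-class PCI device is currently set up for. It is illustrative only, not CloudStack agent code.

```python
# Illustrative only: classify display-class PCI devices on a KVM host.
# A device exposing mdev_supported_types can host vGPUs (mediated devices);
# one bound to vfio-pci is ready for whole-GPU passthrough.
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

def classify_gpus() -> None:
    for dev in sorted(PCI_DEVICES.iterdir()):
        # PCI class 0x03xxxx covers VGA (0x0300) and 3D (0x0302) controllers.
        if not (dev / "class").read_text().startswith("0x03"):
            continue
        driver_link = dev / "driver"
        driver = driver_link.resolve().name if driver_link.exists() else "none"
        mdev_dir = dev / "mdev_supported_types"
        vgpu_types = [t.name for t in mdev_dir.iterdir()] if mdev_dir.is_dir() else []
        if vgpu_types:
            mode = "vGPU-capable"
        elif driver == "vfio-pci":
            mode = "passthrough (vfio-pci)"
        else:
            mode = "host driver"
        print(f"{dev.name}: driver={driver} mode={mode} vgpu_types={vgpu_types}")

if __name__ == "__main__":
    classify_gpus()
```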


##########
source/adminguide/hosts.rst:
##########
@@ -223,6 +223,38 @@ Following hypervisor-specific documentations can be referred for different maxim
    Guest Instance limit check is not done while deploying an Instance on a KVM hypervisor host.
 
 
+.. _discovering-gpu-devices-on-hosts:
+
+Discovering GPU Devices on Hosts
+--------------------------------
+
+For KVM, the user needs to ensure that IOMMU is enabled and the necessary
+drivers are installed. And if vGPU is to be used, the user needs to ensure that
+the vGPU type is supported by the host and has been created on the host. The

Review Comment:
   [nitpick] Redundant 'on the host'; you could drop the second occurrence for clarity.
   ```suggestion
   the vGPU type is supported by the host and has been created. The
   ```

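For context on the prerequisites in this hunk (IOMMU enabled, vGPU type supported and created on the host): a minimal sketch of host-side checks an operator might run before adding the KVM host. It relies only on standard Linux sysfs paths and is not the CloudStack discovery code.

```python
# Illustrative only: quick host-side checks matching the prerequisites above --
# IOMMU enabled, and at least one mediated device (vGPU) already created.
from pathlib import Path

def iommu_enabled() -> bool:
    # Populated only when the kernel booted with IOMMU support enabled
    # (e.g. intel_iommu=on or amd_iommu=on plus VT-d/AMD-Vi in firmware).
    groups = Path("/sys/kernel/iommu_groups")
    return groups.is_dir() and any(groups.iterdir())

def created_mdevs() -> list:
    # Each entry under /sys/bus/mdev/devices is a mediated device (vGPU
    # instance) that has already been created on this host.
    mdevs = Path("/sys/bus/mdev/devices")
    return [d.name for d in mdevs.iterdir()] if mdevs.is_dir() else []

if __name__ == "__main__":
    print("IOMMU enabled:", iommu_enabled())
    print("Created vGPU mediated devices:", created_mdevs() or "none")
```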


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
