jtuyls commented on a change in pull request #6343: URL: https://github.com/apache/incubator-tvm/pull/6343#discussion_r479614422
########## File path: docs/deploy/vitis_ai.rst ########## @@ -0,0 +1,617 @@

Vitis-AI Integration
====================

`Vitis-AI <https://github.com/Xilinx/Vitis-AI>`__ is Xilinx's
development stack for hardware-accelerated AI inference on Xilinx
platforms, including both edge devices and Alveo cards. It consists of
optimized IP, tools, libraries, models, and example designs. It is
designed with high efficiency and ease of use in mind, unleashing the
full potential of AI acceleration on Xilinx FPGAs and ACAPs.

The current Vitis-AI BYOC flow inside TVM enables acceleration of neural
network model inference on edge and cloud devices. The identifiers for the
supported edge and cloud Deep Learning Processor Units (DPUs) are
DPUCZDX8G and DPUCADX8G, respectively. DPUCZDX8G and DPUCADX8G are
hardware accelerators for convolutional neural networks (CNNs) on top of
the Xilinx `Zynq Ultrascale+
MPSoc <https://www.xilinx.com/products/silicon-devices/soc/zynq-ultrascale-mpsoc.html>`__
and
`Alveo <https://www.xilinx.com/products/boards-and-kits/alveo.html>`__
(U200/U250) platforms, respectively. For more information about the DPU
identifiers see the section on `DPU naming information <#dpu-naming-information>`__.

On this page you will find information on how to
`build <#build-instructions>`__ TVM with Vitis-AI and on how to `get
started <#getting-started>`__ with an example.
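As an illustration of how the two identifiers above break down, the following sketch decodes a DPU name into the fields of the naming scheme described in the "DPU naming information" section below. The `decode_dpu` helper and its field names are assumptions made for this example; they are not part of TVM or Vitis-AI.

```python
# Illustrative decoder for DPU identifier strings, based on the DPU
# naming table in this document. The function name and return keys are
# hypothetical, chosen only for this sketch.
import re

APPLICATION = {"C": "CNN", "R": "RNN"}
HW_PLATFORM = {"AD": "Alveo DDR", "AH": "Alveo HBM",
               "VD": "Versal DDR with AIE & PL", "ZD": "Zynq DDR"}
QUANT_METHOD = {"X": "DECENT", "I": "Integer threshold",
                "F": "Float threshold", "R": "RNN"}
BITWIDTH = {"4": "4-bit", "8": "8-bit", "16": "16-bit", "M": "Mixed Precision"}
DESIGN_TARGET = {"G": "General purpose", "H": "High throughput",
                 "L": "Low latency", "C": "Cost optimized"}

def decode_dpu(name):
    """Split an identifier such as 'DPUCZDX8G' into its naming-scheme fields."""
    m = re.fullmatch(r"DPU([CR])(AD|AH|VD|ZD)([XIFR])(4|8|16|M)([GHLC])", name)
    if m is None:
        raise ValueError("not a recognized DPU identifier: %s" % name)
    app, hw, quant, bits, target = m.groups()
    return {
        "application": APPLICATION[app],
        "hw_platform": HW_PLATFORM[hw],
        "quantization_method": QUANT_METHOD[quant],
        "quantization_bitwidth": BITWIDTH[bits],
        "design_target": DESIGN_TARGET[target],
    }

print(decode_dpu("DPUCZDX8G")["hw_platform"])  # Zynq DDR
print(decode_dpu("DPUCADX8G")["hw_platform"])  # Alveo DDR
```

So the edge identifier DPUCZDX8G reads as a CNN accelerator on Zynq DDR with DECENT 8-bit quantization, general purpose, and the cloud identifier DPUCADX8G differs only in targeting Alveo DDR.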
DPU naming information
----------------------

+-------------------------------+-------------+------------------------------+----------------------+-----------------------+--------------------+
| DPU                           | Application | HW Platform                  | Quantization Method  | Quantization Bitwidth | Design Target      |
+===============================+=============+==============================+======================+=======================+====================+
| Deep Learning Processing Unit | C: CNN      | AD: Alveo DDR                | X: DECENT            | 4: 4-bit              | G: General purpose |
|                               | R: RNN      | AH: Alveo HBM                | I: Integer threshold | 8: 8-bit              | H: High throughput |
|                               |             | VD: Versal DDR with AIE & PL | F: Float threshold   | 16: 16-bit            | L: Low latency     |
|                               |             | ZD: Zynq DDR                 | R: RNN               | M: Mixed Precision    | C: Cost optimized  |
+-------------------------------+-------------+------------------------------+----------------------+-----------------------+--------------------+

Build instructions
------------------

This section lists the instructions for building TVM with Vitis-AI for
both `cloud <#cloud-dpucadx8g>`__ and `edge <#edge-dpuczdx8g>`__ deployments.

Cloud (DPUCADX8G)
~~~~~~~~~~~~~~~~~

For Vitis-AI acceleration in the cloud, TVM has to be built on top of the
Xilinx Alveo platform.

System requirements
^^^^^^^^^^^^^^^^^^^

The following table lists the system requirements for running docker
containers as well as Alveo cards.
+---------------------------------------------------+--------------------------------------------------------+
| **Component**                                     | **Requirement**                                        |
+===================================================+========================================================+
| Motherboard                                       | PCI Express 3.0-compliant with one dual-width x16 slot |
+---------------------------------------------------+--------------------------------------------------------+
| System Power Supply                               | 225W                                                   |
+---------------------------------------------------+--------------------------------------------------------+
| Operating System                                  | Ubuntu 16.04, 18.04                                    |
+---------------------------------------------------+--------------------------------------------------------+
|                                                   | CentOS 7.4, 7.5                                        |
+---------------------------------------------------+--------------------------------------------------------+
|                                                   | RHEL 7.4, 7.5                                          |
+---------------------------------------------------+--------------------------------------------------------+
| CPU                                               | Intel i3/i5/i7/i9/Xeon 64-bit CPU                      |
+---------------------------------------------------+--------------------------------------------------------+
| GPU (Optional to accelerate quantization)         | NVIDIA GPU with a compute capability > 3.0             |
+---------------------------------------------------+--------------------------------------------------------+
| CUDA Driver (Optional to accelerate quantization) | nvidia-410                                             |
+---------------------------------------------------+--------------------------------------------------------+
| FPGA                                              | Xilinx Alveo U200 or U250                              |
+---------------------------------------------------+--------------------------------------------------------+
| Docker Version                                    | 19.03.1                                                |
+---------------------------------------------------+--------------------------------------------------------+

Hardware setup and docker build
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. Clone the Vitis AI repository:

   ::

      git clone --recurse-submodules https://github.com/Xilinx/Vitis-AI

2. Install Docker and add the user to the docker group. Follow the
   installation and post-installation instructions on Docker's website:

   - https://docs.docker.com/install/linux/docker-ce/ubuntu/
   - https://docs.docker.com/install/linux/docker-ce/centos/
   - https://docs.docker.com/install/linux/linux-postinstall/

3. Any GPU instructions will have to be separated from Vitis AI.

Review comment:
The clue is that you will have to set up the GPU yourself if you want to use Vitis-AI with a GPU (for accelerated quantization), but I agree that this is not clear from the text.

----------------------------------------------------------------
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: [email protected]
