KellenSunderland commented on a change in pull request #14860: [Review, don't 
merge before 1.5] Update TRT tutorial with new APIs
URL: https://github.com/apache/incubator-mxnet/pull/14860#discussion_r291717679
 
 

 ##########
 File path: docs/tutorials/tensorrt/inference_with_trt.md
 ##########
 @@ -17,29 +17,23 @@
 
 # Optimizing Deep Learning Computation Graphs with TensorRT
 
-NVIDIA's TensorRT is a deep learning library that has been shown to provide 
large speedups when used for network inference. MXNet 1.3.0 is shipping with 
experimental integrated support for TensorRT. This means MXNet users can noew 
make use of this acceleration library to efficiently run their networks. In 
this blog post we'll see how to install, enable and run TensorRT with MXNet.  
We'll also give some insight into what is happening behind the scenes in MXNet 
to enable TensorRT graph execution.
+NVIDIA's TensorRT is a deep learning library that has been shown to provide 
large speedups when used for network inference. MXNet 1.5.0 and later versions 
ship with experimental integrated support for TensorRT. This means MXNet users 
can now make use of this acceleration library to efficiently run their 
networks. In this tutorial we'll see how to install, enable and run TensorRT 
with MXNet.  We'll also give some insight into what is happening behind the 
scenes in MXNet to enable TensorRT graph execution.
 
 ## Installation and Prerequisites
-Installing MXNet with TensorRT integration is an easy process. First ensure 
that you are running Ubuntu 16.04, that you have updated your video drivers, 
and you have installed CUDA 9.0 or 9.2.  You'll need a Pascal or newer 
generation NVIDIA gpu.  You'll also have to download and install TensorRT 
libraries [instructions 
here](https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html).
  Once your these prerequisites installed and up-to-date you can install a 
special build of MXNet with TensorRT support enabled via PyPi and pip.  Install 
the appropriate version by running:
+Installing MXNet with TensorRT integration is an easy process. First ensure 
that you are running Ubuntu 18.04, that you have updated your video drivers, 
and that you have installed CUDA 10.0.  You'll need a Pascal or newer 
generation NVIDIA GPU.  You'll also have to download and install the TensorRT 
libraries ([instructions 
here](https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html)).
 Once you have these prerequisites installed and up-to-date, you can install a 
special build of MXNet with TensorRT support enabled via PyPI and pip.  Install 
the appropriate version by running:
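
(Side note for readers: the actual pip command is cut off by the diff context 
above. As a rough illustration only, the package name below is a placeholder 
assumption, not the real wheel name; check PyPI for the TensorRT-enabled MXNet 
build that matches your CUDA version:)

```
# Placeholder package name -- substitute the actual TensorRT-enabled MXNet
# wheel for your CUDA version as listed on PyPI.
pip install mxnet-tensorrt-cu100
```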
 
 Review comment:
   The CI is currently targeting 10.0 but it should work with newer versions.  
I'll update to say 10 or newer.
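
For anyone checking whether their machine meets the "10.0 or newer" 
requirement discussed above, here is a rough sketch of how to confirm the 
installed driver, CUDA toolkit, and TensorRT packages on Ubuntu (the 
libnvinfer package naming is typical for TensorRT debs but may vary between 
releases):

```
# Show the installed NVIDIA driver and the GPUs it sees.
nvidia-smi

# Show the installed CUDA toolkit version (should be 10.0 or newer).
nvcc --version

# List installed TensorRT packages; libnvinfer* is the usual deb naming,
# though it may differ between TensorRT releases.
dpkg -l | grep -i nvinfer
```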

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
