Caenorst commented on a change in pull request #14860: Update TRT tutorial with
new APIs
URL: https://github.com/apache/incubator-mxnet/pull/14860#discussion_r313980665
##########
File path: docs/tutorials/tensorrt/inference_with_trt.md
##########
@@ -83,26 +76,23 @@ end = time.time()
print(time.process_time() - start)
```
-For this experiment we are strictly interested in inference performance, so to
-simplify the benchmark we'll pass a tensor filled with zeros as an input. We
-then bind a symbol as usual, returning a normal MXNet executor, and we run
-forward on this executor in a loop. To help improve the accuracy of our
-benchmarks we run a small number of predictions as a warmup before running our
-timed loop. This will ensure various lazy operations, which do not represent
-real-world usage, have completed before we measure relative performance
-improvement. On a modern PC with a Titan V GPU the time taken for our MXNet
-baseline is **33.73s**. Next we'll run the same model with TensorRT enabled,
-and see how the performance compares.
-
-While TensorRT integration remains experimental, we require users to set an
-environment variable to enable graph compilation. You can see that at the
-start of this test we explicitly disabled TensorRT graph compilation support.
-Next, we will run the same predictions using TensorRT. This will require us to
-explicitly enable the MXNET_USE_TENSORRT environment variable, and we'll also
-use a slightly different API to bind our symbol.
+We are interested in inference performance, so to simplify the benchmark we'll
+pass a tensor filled with zeros as an input. We bind a symbol as usual,
+returning an MXNet executor, and we run forward on this executor in a loop. To
+help improve the accuracy of our benchmarks we run a small number of
+predictions as a warmup before running our timed loop. On a modern PC with an
+RTX 2070 GPU the time taken for our MXNet baseline is **17.20s**. Next we'll
+run the same model with TensorRT enabled, and see how the performance compares.
## MXNet with TensorRT Integration Performance
```python
# Execute with TensorRT
print('Building TensorRT engine')
-os.environ['MXNET_USE_TENSORRT'] = '1'
-arg_params.update(aux_params)
-all_params = dict([(k, v.as_in_context(mx.gpu(0))) for k, v in arg_params.items()])
-executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=mx.gpu(0), all_params=all_params,
-    data=batch_shape, grad_req='null', force_rebind=True)
+trt_sym = sym.get_backend_symbol('TensorRT')
+mx.contrib.tensorrt.init_tensorrt_params(trt_sym, arg_params, aux_params)
Review comment:
The fact that `init_tensorrt_params` modifies its `arg_params` and
`aux_params` inputs is unwanted behavior that I intend to fix; please use the
returned `arg_params` / `aux_params` instead.
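The in-place-vs-returned distinction the comment raises can be sketched in plain Python. This is an illustration only: `init_params` and its copy-then-update behavior are hypothetical stand-ins for what a fixed `init_tensorrt_params` might do, not the actual MXNet API.

```python
def init_params(arg_params, aux_params):
    # Return fresh dicts rather than mutating the caller's inputs,
    # so the originals stay valid for other uses.
    new_arg = dict(arg_params)  # shallow copy; caller's dict untouched
    new_aux = dict(aux_params)
    new_arg["engine_weight"] = 1  # hypothetical engine-specific addition
    return new_arg, new_aux

args, auxs = {"w": 0}, {"m": 0}
new_args, new_auxs = init_params(args, auxs)
# Callers then bind with new_args / new_auxs; args and auxs are unchanged.
```

With this shape, code that still holds references to the original dictionaries is unaffected, which is the behavior the reviewer is asking tutorial users to rely on.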
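The warmup-then-measure pattern the tutorial text in this hunk describes can be sketched in plain Python. This is a minimal sketch with invented names (`bench`, `warmup`, `iters`); a real benchmark would call the bound executor's forward pass instead of a dummy function.

```python
import time

def bench(fn, warmup=10, iters=100):
    # Untimed warmup: lets one-off lazy work (allocation, graph or
    # engine compilation) finish before measurement begins.
    for _ in range(warmup):
        fn()
    # Timed loop: only steady-state iterations are measured.
    start = time.time()
    for _ in range(iters):
        fn()
    return time.time() - start

elapsed = bench(lambda: sum(range(1000)))
```

Excluding the warmup runs is what makes the MXNet baseline and TensorRT numbers in the tutorial comparable, since TensorRT spends extra time building its engine on the first passes.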
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services