Environment: MXNet 1.3.0, CUDA 9.0, cuDNN 7.1, TensorRT 4.0, Ubuntu 16.04.
(1) Inference with TensorRT gives a bad result.
import os
import mxnet as mx

# Enable the TensorRT backend before binding.
os.environ['MXNET_USE_TENSORRT'] = '1'

# sym, arg_params and aux_params come from a previously loaded checkpoint.
arg_params.update(aux_params)
all_params = dict([(k, v.as_in_context(mx.gpu(1)))
                   for k, v in arg_params.items()])
executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=mx.gpu(1),
                                             all_params=all_params,
                                             data=batch_shape, grad_req='null',
                                             force_rebind=True)
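For context, this is roughly how the bound executor is then driven for inference (the input array here is a placeholder assumption; any NDArray matching batch_shape works the same way):

input_data = mx.nd.zeros(batch_shape, ctx=mx.gpu(1))  # placeholder input
outputs = executor.forward(is_train=False, data=input_data)
result = outputs[0].asnumpy()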
(2) Inference without TensorRT (plain simple_bind) gives the correct result.
# With MXNET_USE_TENSORRT unset (or '0'), bind the same symbol normally.
executor = sym.simple_bind(ctx=mx.gpu(1), data=batch_shape, grad_req='null',
                           force_rebind=True)
executor.copy_params_from(arg_params, aux_params)
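To quantify "bad result", a minimal sketch that compares the two paths on an identical input; trt_executor and ref_executor are hypothetical names for the executors built in snippets (1) and (2):

import numpy as np

x = mx.nd.random.uniform(shape=batch_shape, ctx=mx.gpu(1))
trt_out = trt_executor.forward(is_train=False, data=x)[0].asnumpy()
ref_out = ref_executor.forward(is_train=False, data=x)[0].asnumpy()
print('max abs diff:', np.abs(trt_out - ref_out).max())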
(3) Inference using the Gluon API gives the correct result.
from mxnet import gluon

ctx = mx.gpu(1)  # same device as the executors above
model = gluon.nn.SymbolBlock(outputs=mx.sym.load('yolov3_head.json'),
                             inputs=mx.sym.var('data'))
model.load_params('yolov3_head.params', ctx=ctx)
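A minimal usage sketch for the Gluon path, assuming the same batch_shape as above:

x = mx.nd.zeros(batch_shape, ctx=ctx)
out = model(x)  # one NDArray per output of the loaded symbol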
Why does the TensorRT API produce a bad result?
[ Full content available at: https://github.com/apache/incubator-mxnet/issues/12583 ]