Environment: mxnet-tensorrt-cu90, TensorRT 4.0, CUDA 9.0, cuDNN 7.1, Python 3.5
(1) Inference with sym.simple_bind gives the correct detection result.
executor = sym.simple_bind(ctx=ctx, data=batch_shape, grad_req='null',
                           force_rebind=True)
executor.copy_params_from(arg_params, aux_params)
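
For context, the checkpoint loading and forward pass around this bind look roughly like the following; the prefix 'model', epoch 0, and the input shape are placeholders for my actual setup:

import mxnet as mx

ctx = mx.gpu(0)
batch_shape = (1, 3, 512, 512)   # placeholder input shape

# load the trained checkpoint; prefix/epoch are placeholders
sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)

# bind a plain (non-TensorRT) executor and copy the weights in
executor = sym.simple_bind(ctx=ctx, data=batch_shape, grad_req='null',
                           force_rebind=True)
executor.copy_params_from(arg_params, aux_params)

# run one forward pass on a dummy input
data = mx.nd.zeros(batch_shape, ctx=ctx)
outputs = executor.forward(is_train=False, data=data)
print(outputs[0].shape)
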
(2) Inference with mx.contrib.tensorrt.tensorrt_bind gives bad detection results.
os.environ['MXNET_USE_TENSORRT'] = '1'
arg_params.update(aux_params)
all_params = dict([(k, v.as_in_context(mx.gpu(1))) for k, v in
                   arg_params.items()])
executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=mx.gpu(1),
                                             all_params=all_params,
                                             data=batch_shape,
                                             grad_req='null', force_rebind=True)
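
End to end, the TensorRT path I am running is roughly the sketch below (again with placeholder prefix, epoch, and shape); the environment variable is set before the bind call:

import os
import mxnet as mx

os.environ['MXNET_USE_TENSORRT'] = '1'    # must be set before tensorrt_bind

ctx = mx.gpu(1)
batch_shape = (1, 3, 512, 512)            # placeholder input shape

# load the same checkpoint as in (1); prefix/epoch are placeholders
sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)

# tensorrt_bind takes one dict with all weights, already on the target device
arg_params.update(aux_params)
all_params = dict([(k, v.as_in_context(ctx)) for k, v in arg_params.items()])

executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=ctx,
                                             all_params=all_params,
                                             data=batch_shape,
                                             grad_req='null', force_rebind=True)

data = mx.nd.zeros(batch_shape, ctx=ctx)
outputs = executor.forward(is_train=False, data=data)
print(outputs[0].shape)
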
(3) Inference with Gluon gives the correct detection result.
self.__model = gluon.nn.SymbolBlock(outputs=mx.sym.load(self.__symbol),
                                    inputs=mx.sym.var('data'))
self.__model.load_parameters(self.__params, ctx=self.__ctx)
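
As a standalone version of this Gluon path, where 'model-symbol.json' and 'model-0000.params' stand in for my self.__symbol and self.__params files and the shape is a placeholder:

import mxnet as mx
from mxnet import gluon

ctx = mx.gpu(0)

# rebuild the network from the exported symbol and load its weights
net = gluon.nn.SymbolBlock(outputs=mx.sym.load('model-symbol.json'),
                           inputs=mx.sym.var('data'))
net.load_parameters('model-0000.params', ctx=ctx)

data = mx.nd.zeros((1, 3, 512, 512), ctx=ctx)   # placeholder input
out = net(data)
print(out.shape)
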
I don't know why this happens. Does using TensorRT lead to lower accuracy? Thanks for your help!
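
To see whether this is a numerical-precision issue or something worse, one check I can run is to feed the same input through both executors and compare the raw outputs. A minimal sketch, assuming executor_fp32 and executor_trt are the executors bound in (1) and (2) above and ctx_fp32 / ctx_trt are the contexts they were bound on (these names are placeholders):

import numpy as np
import mxnet as mx

data = mx.nd.random.uniform(shape=batch_shape)

out_fp32 = executor_fp32.forward(is_train=False,
                                 data=data.as_in_context(ctx_fp32))[0].asnumpy()
out_trt = executor_trt.forward(is_train=False,
                               data=data.as_in_context(ctx_trt))[0].asnumpy()

# tiny element-wise differences would point to FP32-vs-TensorRT numerics,
# large ones to a graph or parameter problem
print('max abs diff :', np.max(np.abs(out_fp32 - out_trt)))
print('mean abs diff:', np.mean(np.abs(out_fp32 - out_trt)))
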