lizhen2017 opened a new issue #12583: mx.contrib.tensorrt.tensorrt_bind result error
URL: https://github.com/apache/incubator-mxnet/issues/12583
 
 
   Environment: MXNet 1.3.0, CUDA 9.0, cuDNN 7.1, TensorRT 4.0, Ubuntu 16.04.
   (1) Inference with TensorRT gives a bad result:
   os.environ['MXNET_USE_TENSORRT'] = '1'
   arg_params.update(aux_params)
   all_params = dict([(k, v.as_in_context(mx.gpu(1))) for k, v in arg_params.items()])
   #print(all_params)
   executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=mx.gpu(1), all_params=all_params,
                                                data=batch_shape, grad_req='null', force_rebind=True)
   (2) Inference without TensorRT gives the correct result:
   #executor = sym.simple_bind(ctx=mx.gpu(1), data=batch_shape, grad_req='null', force_rebind=True)
   #executor.copy_params_from(arg_params, aux_params)
   (3) Inference using the Gluon API also gives the correct result:
   model = gluon.nn.SymbolBlock(outputs=mx.sym.load('yolov3_head.json'), inputs=mx.sym.var('data'))
   model.load_params('yolov3_head.params', ctx=ctx)
   Why does inference through the TensorRT API produce a bad result?
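   When triaging a report like this it helps to quantify "bad result" rather than inspect outputs by eye: run the same batch through the TensorRT executor and a reference executor (e.g. the simple_bind one above), then compare the output arrays numerically. The helper below is a hedged sketch, not part of the issue: the function name and tolerance values are illustrative, and it is written in pure NumPy so it works on any pair of arrays obtained via .asnumpy(). Small relative errors point at precision differences (TensorRT may fuse or reorder layers); large ones point at a real binding bug such as mismatched parameters.

   ```python
   import numpy as np

   def compare_outputs(out_a, out_b, rtol=1e-2, atol=1e-3):
       """Summarize the elementwise discrepancy between two output arrays.

       Exact equality is not expected between a TensorRT engine and the
       stock MXNet executor, so we report both the maximum absolute and
       maximum relative error alongside an allclose verdict.
       """
       out_a = np.asarray(out_a, dtype=np.float64)
       out_b = np.asarray(out_b, dtype=np.float64)
       abs_err = np.abs(out_a - out_b)
       # Guard against division by zero for exactly-zero reference values.
       rel_err = abs_err / (np.abs(out_b) + 1e-12)
       return {
           'max_abs_err': float(abs_err.max()),
           'max_rel_err': float(rel_err.max()),
           'allclose': bool(np.allclose(out_a, out_b, rtol=rtol, atol=atol)),
       }
   ```

   A hypothetical usage, assuming both executors from the issue have already run forward on the same input batch: compare_outputs(trt_executor.outputs[0].asnumpy(), ref_executor.outputs[0].asnumpy()).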

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
