guyzsarun opened a new issue #7563:
URL: https://github.com/apache/tvm/issues/7563


   When autotuning with batch_size greater than 1 and opt_level=3 on an MXNet model 
with multiple outputs, I see an accuracy drop in some of the outputs.
   opt_level 0, 1, and 2 seem to be fine.
   
   **Environment**
   TVM : 0.7.0
   MXNet: 1.6.0
   OS: Ubuntu 18.04.4
   
   **Code to reproduce**
   ``` python
   import mxnet as mx
   import tvm
   import tvm.relay as relay
   import numpy as np
   from numpy import dot
   from numpy.linalg import norm
   from tvm.contrib import graph_runtime
   
   model_name = "model"
   opt_level = 3
   
   input_shape = [2, 3, 160, 160]
   
   sym, arg_params, aux_params = mx.model.load_checkpoint(model_name, 0)
   
   input_dict = {"data": input_shape}
   
   mod, param = relay.frontend.from_mxnet(
           sym, input_dict, arg_params=arg_params, aux_params=aux_params
   )
   ctx = tvm.cpu()
   
   with tvm.transform.PassContext(opt_level=opt_level):
       graph, lib, params = relay.build(mod, target='llvm', params=param)
   
   tvm_json_path = model_name + "_tvm.json"
   tvm_lib_path = model_name + "_tvm.so"
   tvm_params_path = model_name + "_tvm.params"
   
   with open(tvm_params_path, "wb") as fo:
       fo.write(relay.save_param_dict(params))
   with open(tvm_json_path, "w") as fo:
       fo.write(graph)
   lib.export_library(tvm_lib_path)
   ```
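   
   One way to narrow this down (a sketch continuing the snippet above, not something I have run on this model): rebuild with individual optimization passes disabled via `disabled_pass` in the same `PassContext`, and re-check the outputs after each rebuild. The pass name below is only an example of a pass that runs at opt_level=3, not a confirmed culprit.
   ``` python
   # Sketch: disable one opt_level=3 pass at a time to bisect which one
   # causes the accuracy drop. "AlterOpLayout" is an illustrative example;
   # substitute other passes enabled at level 3 as needed.
   with tvm.transform.PassContext(opt_level=3, disabled_pass=["AlterOpLayout"]):
       graph, lib, params = relay.build(mod, target='llvm', params=param)
   ```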
   
   **Comparing Accuracy**
   ``` python
   graph = open(tvm_json_path).read()
   lib = tvm.runtime.load_module(tvm_lib_path)
   params = bytearray(open(tvm_params_path, "rb").read())
   
   data = np.random.rand(2, 3, 160, 160)
   ctx = tvm.cpu(0)
   module = graph_runtime.create(graph, lib, ctx)
   module.load_params(params)
   
   module.set_input('data', data)
   module.run()
   
   output = []
   for i in range(module.get_num_outputs()):
       prediction = module.get_output(i).asnumpy()
   
       output.append(prediction.flatten())
   
   ctx = mx.cpu()
   sym, arg_params, aux_params = mx.model.load_checkpoint(model_name, 0)
   
   arg_params["data"] = mx.nd.array(data.reshape(input_shape))
   
   exe = sym.bind(ctx=ctx, args=arg_params, aux_states=aux_params, grad_req="null")
   exe.forward()
   
   output_mx = []
   for pred in exe.outputs:
       output_mx.append(pred.asnumpy().flatten())
   
   for i in range(len(output_mx)):
       cosim = dot(output[i], output_mx[i]) / (
           norm(output[i]) * norm(output_mx[i])
       )
       print("Similarity Score for output {0} : {1:.2f} ".format(i + 1, cosim))
   ```
   **Results**
   ```
   Similarity Score for output 1 : 1.00 
   Similarity Score for output 2 : 1.00 
   Similarity Score for output 3 : 0.72 
   Similarity Score for output 4 : 1.00 
   Similarity Score for output 5 : 1.00 
   Similarity Score for output 6 : 0.59 
   Similarity Score for output 7 : 0.99 
   Similarity Score for output 8 : 1.00 
   Similarity Score for output 9 : 0.85 
   ```
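   
   Cosine similarity is scale-invariant, so it can understate or mask certain kinds of divergence. A stricter elementwise check in plain NumPy (independent of TVM/MXNet; `max_errors` is a hypothetical helper, not part of the repro above) quantifies the mismatch more directly:
   ``` python
   import numpy as np
   
   def max_errors(a, b):
       """Return (max absolute error, max relative error) between two outputs."""
       a = np.asarray(a, dtype=np.float64).ravel()
       b = np.asarray(b, dtype=np.float64).ravel()
       abs_err = np.abs(a - b)
       rel_err = abs_err / np.maximum(np.abs(b), 1e-12)  # guard against /0
       return abs_err.max(), rel_err.max()
   
   # Synthetic example: identical arrays give zero error.
   x = np.linspace(-1.0, 1.0, 8)
   print(max_errors(x, x))        # (0.0, 0.0)
   print(max_errors(x + 0.1, x))  # max abs error is ~0.1
   ```
   Applied per output as `max_errors(output[i], output_mx[i])`, this would show whether the 0.59–0.85 similarity scores come from a few large outliers or from a broad drift across the tensor.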
   
   Link to [tvm discuss](https://discuss.tvm.apache.org/t/bug-performance-drop-with-batch-and-opt-level-3/9193)

