wxyhv opened a new issue #7762:
URL: https://github.com/apache/tvm/issues/7762


   After I finish tuning, I build the TVM module, call set_input(), and run().
Then I use GraphModule.get_output() to fetch the result, which comes back as a
tvm.runtime.ndarray.NDArray.
   However, I need a numpy.ndarray for the post-processing that follows, such
as indexing into the data, numpy.expand_dims(), and so on. So I call asnumpy()
to convert it, but asnumpy() costs a lot of time, even more than the TVM
module's run() time. This makes the whole inference much slower than the
untuned deep-learning model.
   The following is the relevant part of my inference code, which shows how
asnumpy() inflates the total inference time:
   ```python
   # set input data
   self.tuned_module.set_input("Input", data)
   # execute
   self.tuned_module.run()
   # get outputs
   # logits = self.tuned_module.get_output(
   #     0, tvm.nd.empty(self.output_shape, "float32"))
   logits = self.tuned_module.get_output(0)
   print(type(logits))  # <class 'tvm.runtime.ndarray.NDArray'>

   t1 = time.time()
   logits = logits.asnumpy()  # NDArray -> numpy conversion
   t2 = time.time()
   print("asnumpy_time:", t2 - t1)  # 255 ms
   print(type(logits))  # <class 'numpy.ndarray'>

   logits = logits[:, 0, :]
   logits = numpy.expand_dims(logits, 1)
   ```
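   Note: I am not sure whether run() is synchronous on my target; for
reference, this is how I would time it with an explicit sync (a minimal
sketch; `ctx` is an assumption and should be the device context the module
was built for, e.g. tvm.gpu(0); ctx.sync() is there because run() can return
before the device work finishes):

   ```python
   import time
   import tvm

   # inside the same method as the snippet above
   ctx = tvm.gpu(0)  # assumed build context; tvm.cpu(0) for CPU targets

   t0 = time.time()
   self.tuned_module.run()
   ctx.sync()  # block until all queued device work has finished
   t1 = time.time()
   print("run_time:", t1 - t0)
   ```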
   
   Consequently, my questions are:
   1. Why does TVM compute with "tvm.runtime.ndarray.NDArray" rather than the
numpy library? numpy has many useful functions for processing data that
"tvm.runtime.ndarray.NDArray" lacks (the round-trip sketch after the questions
shows what this costs).
   
   2. How can I compute with numpy directly rather than with TVM's NDArray?
This would save a lot of time over the whole deep-learning inference.
   
   3. How can I read the TVM run (or compute) result directly from its memory
address, to avoid another convert-and-copy of the result out of the NDArray
(the input data was already converted to an NDArray by set_input())? See the
pre-allocation sketch below.
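   To make questions 1 and 2 concrete, this is the round trip every inference
currently pays (a minimal sketch; the shape is made up):

   ```python
   import numpy
   import tvm

   np_in = numpy.random.rand(1, 128, 256).astype("float32")  # made-up shape

   tvm_in = tvm.nd.array(np_in)  # numpy -> NDArray: one copy (set_input() does the same)
   np_out = tvm_in.asnumpy()     # NDArray -> numpy: another copy

   # the post-processing only exists on the numpy side
   np_out = numpy.expand_dims(np_out[:, 0, :], 1)
   ```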
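   For question 3, the closest thing I found is the pre-allocation variant
from the commented-out lines in the snippet above: allocate the output NDArray
once and let get_output() write into it, so no fresh NDArray is allocated per
call. The final asnumpy() copy still remains (sketch):

   ```python
   import tvm

   # allocate once, e.g. in __init__
   out_buf = tvm.nd.empty(self.output_shape, "float32")

   # per inference:
   self.tuned_module.run()
   self.tuned_module.get_output(0, out_buf)  # writes into the existing buffer
   logits = out_buf.asnumpy()                # the copy to numpy still happens here
   ```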
   
   Looking forward to your reply! 
   Thank you very much!
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

