chunit-quic opened a new pull request #9723:
URL: https://github.com/apache/tvm/pull/9723


   * Add a common span-filling feature for the TF1/TF2, TFLite, and PyTorch frontends.
   * Add test cases for span filling in each frontend.
   * Expose Tuple and TupleGetItem to the Python side.
   
   Hi community,
   
   Here is a pull request that adds span filling for the frontend -> Relay conversions
   (frontends: TF 1 and 2, TFLite, and PyTorch).
   This feature helps users track the conversion more precisely.
   I would like to describe how it works and its current status below. :D
   
   1. One-to-many conversion
   First, although there is already a span-filling function for TensorFlow 1 and 2, some spans still end up empty from time to time.
   One of the reasons is that an op conversion might be a one-to-many conversion.
   In this situation the intermediate ops end up with empty spans.
   Take the [pack](https://github.com/apache/tvm/blob/2b35cfd6ddb73afecd3f550f33881e1fdc7c3267/python/tvm/relay/frontend/tensorflow_ops.py#L1535) conversion for example: several expand_dims ops may be added before the concatenate.
   By adding an ExprMutator that traverses the expression each time an op is converted, we obtain a fully span-tagged Relay IR.
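The traversal described above can be sketched with a small toy model. This is a hypothetical illustration of the span-filling idea, not the PR's actual code: the `Expr` class below stands in for Relay expressions, and `fill_spans` mimics an ExprMutator pass that tags every still-untagged node produced by a one-to-many conversion, using the `_DERIVED_<n>` naming scheme visible in the Relay dump further down.

```python
# Toy stand-in for a Relay expression node (NOT the tvm.relay API).
class Expr:
    def __init__(self, op, args=()):
        self.op = op            # op name, e.g. "expand_dims"
        self.args = list(args)  # child expressions
        self.span = None        # source-level name; None means untagged


def fill_spans(expr, base_name):
    """Tag every untagged node under `expr` with a span derived from base_name.

    The first untagged node found (post-order) gets `base_name` itself;
    later intermediate ops produced by the same one-to-many conversion get
    `base_name + "_DERIVED_<n>"`, mirroring the example in this PR.
    """
    counter = [-1]  # -1: the bare base name has not been assigned yet

    def visit(e):
        for arg in e.args:      # post-order: children first
            visit(arg)
        if e.span is None:      # never overwrite a span set earlier
            if counter[0] < 0:
                e.span = base_name
            else:
                e.span = "%s_DERIVED_%d" % (base_name, counter[0])
            counter[0] += 1

    visit(expr)
    return expr
```

Running this over a tree shaped like the `_pack` conversion (three expand_dims feeding a tuple feeding a concatenate) reproduces the `stack`, `stack_DERIVED_0`, ... names shown in the second Relay dump below.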
   
   Here is a simple example.
   Before this patch, the test case (tensorflow/test_forward.py:320) is converted to the following Relay expressions:
   
   > def @main(%input: Tensor[(?, ?, 3, 1), float32]) {
   >   %113 = shape_of(%input, dtype="int32") /* Shape */;
   >   %114 = strided_slice(%113, begin=[0], end=[1], strides=[1], axes=None);
   >   %115 = squeeze(%114) /* strided_slice */;
   >   %116 = expand_dims(%115, axis=0);
   >   %117 = expand_dims(3, axis=0);
   >   %118 = expand_dims(3, axis=0);
   >   %119 = (%116, %117, %118);
   >   %120 = concatenate(%119) /* stack */;
   >   dyn.reshape(%input, %120, newshape=[]) /* output */
   > }
   
   With this patch we obtain the following instead:
   
   > def @main(%input: Tensor[(?, ?, 3, 1), float32]) {
   >   %10 = shape_of(%input, dtype="int32") /* Shape */;
   >   %11 = strided_slice(%10, begin=[0], end=[1], strides=[1], axes=None) /* 
strided_slice */;
   >   %12 = squeeze(%11) /* strided_slice_DERIVED_0 */;
   >   %13 = expand_dims(%12, axis=0) /* stack */;
   >   %14 = expand_dims(3, axis=0) /* stack_DERIVED_0 */;
   >   %15 = expand_dims(3, axis=0) /* stack_DERIVED_1 */;
   >   %16 = (%13, %14, %15) /* stack_DERIVED_2 */;
   >   %17 = concatenate(%16) /* stack_DERIVED_3 */;
   >   dyn.reshape(%input, %17, newshape=[]) /* output */
   > }
   
   Note that it slightly differs from the original format: the "stack" span is tagged on the expand_dims at the very beginning, which matches the conversion steps of the _pack op.
   
   2. Span naming for each frontend
     2.1. TensorFlow (1 and 2): the naming is kept the same.
     2.2. TFLite: the name is a combination of the op's position index and its output tensor name(s).
         The op position is enough to map back to the TFLite model,
         and the output tensor name is helpful when users search for the op in Netron.
     2.3. PyTorch: because PyTorch provides two kinds of graph, jit._trace.TopLevelTracedModule and _C.Graph, two key attributes, kind() and scopeName(), are recorded in a span.
         scopeName() is used to map a Relay expression back to its original PyTorch module in jit._trace.TopLevelTracedModule and _C.Graph.
         Combined with kind(), the position of a node can be precisely located in _C.Graph.
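As a rough sketch of how the naming schemes above could be composed (the helper names here are my own assumptions, not functions from this patch):

```python
def tflite_span_name(op_index, output_tensor_names):
    """TFLite scheme: op position index plus output tensor name(s).

    The index maps the op back to the TFLite model; the tensor names help
    when searching for the op in Netron. The separator is an assumption.
    """
    return "%d_%s" % (op_index, "_".join(output_tensor_names))


def pytorch_span_name(kind, scope_name):
    """PyTorch scheme: record both scopeName() and kind().

    scopeName() maps back to the original module; kind() pins down the
    node inside _C.Graph. Concatenation order/separator are assumptions.
    """
    return "%s_%s" % (scope_name, kind) if scope_name else kind
```

For example, a TFLite conv at position 3 with output tensor `conv2d_out` would be tagged `3_conv2d_out`, and a traced `aten::relu` inside `__module.features.relu` would carry both identifiers.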
   
   3. Limitation
     A few models in test_functional_models.py are still under investigation.
   
   4. Trivial
      Several test cases are attached; they should serve as a quick verifier during review.
   
   Thank you for reading. Any comment is appreciated. :)
   
   

