kevinthesun commented on a change in pull request #5306: [Torch] Support Python list, more realistic recurrent networks
URL: https://github.com/apache/incubator-tvm/pull/5306#discussion_r407269094
 
 

 ##########
 File path: python/tvm/relay/frontend/pytorch.py
 ##########
 @@ -103,11 +180,29 @@ def _impl(inputs, input_types):
         return _op.transform.expand_dims(data, int(axis), 1)
     return _impl
 
-def _concatenate():
+
+def _concatenate(prelude):
+    def tensor_array_concat(lst, axis):
+        assert axis == 0, "Tensor array concat supported only for axis 0"
+        shape = _infer_type_with_prelude(prelude.hd(lst), prelude).shape
+        concat_shape = (Any(),) + tuple(shape[1:])
+
+        tensor_array = _map_tensor_array_constructor(lst, prelude, shape)
+        static_tensor_array_ops = StaticTensorArrayOps(prelude, "float32", concat_shape)
 
 Review comment:
   We should register `shape` instead of `concat_shape`. Take a look at tensor array concat in the tensorflow frontend: https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/tensorflow.py#L1090-L1100

   The reason is that if the shape is fully static, we avoid an unnecessary shape function and runtime memory allocation.
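   To illustrate the idea behind the suggestion, here is a minimal, hypothetical sketch (not TVM code; `Any` below is a stand-in for the relay `Any` dimension, and `shape_to_register` is an invented helper): a dynamic first dimension should only be introduced when the input shape is not already fully static.

```python
class Any:
    """Stand-in for tvm.relay Any: an unknown (dynamic) dimension."""
    def __repr__(self):
        return "?"

def shape_to_register(shape):
    """Hypothetical helper deciding which shape to register for the tensor array.

    If every dimension is a concrete int, the shape is fully static and can be
    registered as-is, so no shape function or runtime memory allocation is
    needed. Only fall back to a dynamic (Any) first dimension otherwise.
    """
    if all(isinstance(dim, int) for dim in shape):
        return tuple(shape)
    # Dynamic case: unknown size along the concat axis (axis 0).
    return (Any(),) + tuple(shape[1:])

print(shape_to_register((4, 8)))       # fully static: registered unchanged
print(shape_to_register((Any(), 8)))   # dynamic first dim is preserved
```

   The tensorflow frontend linked above follows this pattern: the static shape is registered directly, and the dynamic path is taken only when necessary.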

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]