vvchernov commented on a change in pull request #8599:
URL: https://github.com/apache/tvm/pull/8599#discussion_r681657107



##########
File path: python/tvm/relay/frontend/onnx.py
##########
@@ -2142,58 +2142,72 @@ class LSTM(RNN):
     """Operator converter for LSTM"""
 
     @classmethod
-    def generate_lstm(
-        cls, X_steps, H_t, C_t, W, R, B, p_i, p_f, p_o, f_act, g_act, h_act, backwards=False
-    ):
-        """Create an unrolled lstm loop.
-
-        See https://github.com/onnx/onnx/blob/master/docs/Operators.md for math.
+    def unbind(cls, data, axis=0):
         """
-        h_list = []
-        seq_length = len(X_steps)
-        for i in range(seq_length):
-            step = X_steps[i] if not backwards else X_steps[seq_length - (i + 1)]
-            step = _op.squeeze(step, axis=[0])
-            gates = _op.nn.dense(step, W) + _op.nn.dense(H_t, R)
-            if B is not None:
-                WB, RB = _op.split(B, 2)
-                gates += WB + RB
-            i, o, f, c = _op.split(gates, 4, axis=-1)
-
-            if p_i != 0:
-                i = f_act(i + p_i * C_t)
-            else:
-                i = f_act(i)
-
-            if p_f != 0:
-                f = f_act(f + p_f * C_t)
-            else:
-                f = f_act(f)
+        Unbind was adapted from pytorch.py and modified. The operation removes a tensor dimension
+        and returns a tuple of all slices along the given dimension, without that dimension.
+        TODO (vvchernov): such an operation is needed on the relay side to reduce the time
+        spent on the squeeze operation.
 
-            c = g_act(c)
-            C = f * C_t + i * c
-            if p_o != 0:
-                o = f_act(o + p_o * C)
-            else:
-                o = f_act(o)
-
-            H = o * h_act(C)
+        Parameters
+        ----------
+        data : relay.Expr
+            Input tensor
+        axis : int
+            Axis along which the tensor is split. The resulting tensors do not have this axis.

Review comment:
       Yes. See https://pytorch.org/docs/stable/generated/torch.unbind.html; this replicates the torch op.
   This operation is the inverse of the relay stack op. The analogous pair for split is the concatenate op.
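   To illustrate the intended semantics, here is a minimal standalone sketch of unbind in plain NumPy (the actual PR helper operates on relay expressions via `_op.split` and `_op.squeeze`; this function is only an illustration of the behavior, not the PR's code):

```python
import numpy as np

def unbind(data, axis=0):
    """Remove dimension `axis` and return a tuple of all slices along it,
    mirroring the semantics of torch.unbind."""
    # One slice per index along `axis`; squeeze drops the split axis
    # so each returned tensor no longer has that dimension.
    return tuple(
        np.squeeze(s, axis=axis)
        for s in np.split(data, data.shape[axis], axis=axis)
    )

x = np.arange(6).reshape(3, 2)
slices = unbind(x, axis=0)
assert all(s.shape == (2,) for s in slices)
# unbind is the inverse of stack: restacking the slices recovers x.
assert np.array_equal(np.stack(slices, axis=0), x)
```

   This also shows why unbind pairs with stack the way split pairs with concatenate: stack reintroduces the axis that unbind removed.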




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

