masahi commented on a change in pull request #4964: [Torch] Add initial control flow support
URL: https://github.com/apache/incubator-tvm/pull/4964#discussion_r387973444
 
 

 ##########
 File path: python/tvm/relay/frontend/pytorch.py
 ##########
 @@ -614,6 +615,55 @@ def _impl(inputs, input_types):
         return _op.tensor.sqrt(data)
     return _impl
 
+def _neg():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        return _op.tensor.negative(data)
+    return _impl
+
+def _tanh():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        return _op.tensor.tanh(data)
+    return _impl
+
+def _ge():
+    def _impl(inputs, input_types):
+        assert len(inputs) == 2
+        lhs = _wrap_const(inputs[0])
+        rhs = _wrap_const(inputs[1])
+        return _op.tensor.greater_equal(lhs, rhs)
+    return _impl
+
+def _gt():
+    def _impl(inputs, input_types):
+        assert len(inputs) == 2
+        lhs = _wrap_const(inputs[0])
+        rhs = _wrap_const(inputs[1])
+        return _op.tensor.greater(lhs, rhs)
+    return _impl
+
+def _lt():
+    def _impl(inputs, input_types):
+        assert len(inputs) == 2
+        lhs = _wrap_const(inputs[0])
+        rhs = _wrap_const(inputs[1])
+        return _op.tensor.less(lhs, rhs)
+    return _impl
+
+def _Bool():
+    def _impl(inputs, input_types):
 
 Review comment:
  It turns out this uncovered an interesting typing problem. Currently `input_types[0]` is float here, because I'm returning float for untyped tensors. See the diff at L988:
  https://github.com/apache/incubator-tvm/pull/4964/files#diff-1d6fac756c4d51bbd68e6e3f326a4e3dR988-R991
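
  To make that fallback concrete, here is a hypothetical sketch of the idea (the helper name and return value are illustrative, not the actual code in the diff):

  ```python
  # Hypothetical sketch of the default-type fallback described above: when a
  # TorchScript tensor input carries no dtype information, report "float"
  # instead of None so downstream converters have something to work with.
  def _input_type_or_default(torch_dtype):
      if torch_dtype is None:
          # untyped tensor, e.g. produced by torch.jit.script without annotations
          return "float"
      return str(torch_dtype)
  ```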
   
  Since the input tensor type is unknown when we use `torch.jit.script`, the output of `aten::gt`, which should be bool and is the input to `aten::Bool` below, is also unknown.
   
   ```
   graph(%self : __torch__.SimpleIf,
         %inp.1 : Tensor):
     %2 : int = prim::Constant[value=1]()
     %3 : None = prim::Constant()
     %4 : float = prim::Constant[value=0]() # test_forward.py:844:27
     %5 : Tensor = aten::sum(%inp.1, %3) # test_forward.py:844:15
     %6 : Tensor = aten::gt(%5, %4) # test_forward.py:844:15
     %7 : bool = aten::Bool(%6) # test_forward.py:844:15
     %output : Tensor = prim::If(%7) # test_forward.py:844:12
       block0():
         %9 : Tensor = prim::GetAttr[name="weight"](%self)
         %output.1 : Tensor = aten::add(%9, %inp.1, %2) # test_forward.py:845:25
         -> (%output.1)
       block1():
         %11 : Tensor = prim::GetAttr[name="weight"](%self)
        %output.2 : Tensor = aten::sub(%11, %inp.1, %2) # test_forward.py:847:25
         -> (%output.2)
     return (%output)
   ```
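
  For context, the graph above comes from scripting a module roughly like the one below (the parameter shape is just a placeholder; the actual test in test_forward.py may differ):

  ```python
  import torch

  class SimpleIf(torch.nn.Module):
      def __init__(self):
          super().__init__()
          self.weight = torch.nn.Parameter(torch.rand(10, 20))

      def forward(self, inp):
          if inp.sum() > 0.0:
              output = self.weight + inp
          else:
              output = self.weight - inp
          return output

  # Scripting (rather than tracing) keeps the data-dependent branch,
  # which is what produces the prim::If / aten::Bool nodes above.
  print(torch.jit.script(SimpleIf()).graph)
  ```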
   
  I changed the default type to float instead of None, because that seems more robust and fine for most use cases. I remember there is a way to annotate types in TorchScript, but I need to spend some time figuring out how that works. Do you have a better solution @alexwong? Otherwise I don't want to be blocked by this problem in this PR.
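
  For example, `torch.jit.script` honors regular Python 3 type annotations (sketch below), though as far as I can tell a plain `torch.Tensor` annotation still carries no dtype, so it may not resolve the float-vs-bool ambiguity by itself:

  ```python
  import torch

  # torch.jit.script honors Python 3 annotations on arguments and returns,
  # but `torch.Tensor` alone says nothing about dtype, so the result of the
  # comparison below may still show up as a plain Tensor in the graph.
  @torch.jit.script
  def is_positive(x: torch.Tensor) -> torch.Tensor:
      return x.sum() > 0.0

  print(is_positive.graph)
  ```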

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
