tsupei opened a new issue #7102:
URL: https://github.com/apache/tvm/issues/7102


   I tried to use `topi.nn.batch_matmul`, following the instructions 
[here](https://tvm.apache.org/docs/api/python/topi.html). It requires inputs 
of type `tvm.te.Tensor`. However, the following code raises an error.
   
   ```python
   import tvm
   import torch
   from tvm import topi
   from tvm import te
   
   def main():
   
       b = te.var("b")
       n = te.var("n")
       d = te.var("d")
   
       a = te.placeholder((b, n, d), dtype="float32", name="a")
       b = te.placeholder((b, n, d), dtype="float32", name="b")
   
       print(type(a))
   
       c = topi.nn.batch_matmul(a, b)
       print(c)
   
   if __name__ == "__main__":
       main()
   ```
   ```bash
   Traceback (most recent call last):
     File "bmm.py", line 19, in <module>
       main()
     File "bmm.py", line 15, in main
       c = topi.nn.batch_matmul(a, b)
     File "/home/jojo6174/tvm-installation/tvm/python/tvm/topi/nn/batch_matmul.py", line 54, in batch_matmul
       batch = max(XB, YB)
     File "/home/jojo6174/tvm-installation/tvm/python/tvm/tir/expr.py", line 176, in __bool__
       return self.__nonzero__()
     File "/home/jojo6174/tvm-installation/tvm/python/tvm/tir/expr.py", line 172, in __nonzero__
       + "use tvm.tir.all / tvm.tir.any instead"
   ValueError: Cannot use and / or / not operator to Expr, hint: use tvm.tir.all / tvm.tir.any instead
   ```
   
   I found that in `tvm/python/tvm/topi/nn/batch_matmul.py` the Python built-in 
`max` is called on the batch dimensions, but the argument types seem incompatible 
when the shapes are symbolic (`te.var`). Changing it to `te.max` solved the 
problem for me. Am I misunderstanding something? Thanks!
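   For illustration, here is a minimal, self-contained sketch of the failure mode (a toy `Expr` class, not TVM's `tvm.tir.PrimExpr`): the built-in `max` forces `bool()` on the comparison result, which a symbolic expression type deliberately forbids, whereas a symbolic `max` just builds a new expression node, which is what `te.max` does.

   ```python
   # Toy stand-in for a symbolic expression type; NOT tvm.tir.PrimExpr.
   class Expr:
       def __init__(self, name):
           self.name = name

       def __gt__(self, other):
           # Comparison yields another symbolic expression, not a bool.
           return Expr(f"({self.name} > {other.name})")

       def __bool__(self):
           # Mirrors TVM's behavior: a symbolic expression has no truth value.
           raise ValueError("Cannot use and / or / not operator to Expr")


   def symbolic_max(x, y):
       # Builds a max node instead of forcing a boolean, like te.max.
       return Expr(f"max({x.name}, {y.name})")


   a, b = Expr("XB"), Expr("YB")

   try:
       max(a, b)  # built-in max evaluates bool(b > a) -> raises
   except ValueError as e:
       print("built-in max failed:", e)

   print("symbolic max:", symbolic_max(a, b).name)
   ```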
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

