honghuichao opened a new issue, #13387:
URL: https://github.com/apache/tvm/issues/13387

   I tested TVM and found some bugs, as follows:
   (1) In the MXNet frontend:
   def _mx_pad(inputs, attrs):
       pad_mode = attrs.get_str("mode", None)
       if pad_mode is None:
           raise tvm.error.OpAttributeRequired('Attribute "mode" not found in operator pad.')
       if pad_mode not in ["constant", "edge", "reflect"]:
           raise tvm.error.OpAttributeInvalid("Value " + mode + ' in attribute "mode" is not valid')
   
   In the line `raise tvm.error.OpAttributeInvalid("Value " + mode + ...)`, the name `mode` is undeclared; the variable actually assigned above is `pad_mode`.
   
   
   (2) In cudnn.py in python/tvm/contrib:
       idx = -1
       if algo_type == "fwd":
           idx = _FWD_ALGOS.index(algo_name)
       elif algo_type == "bwd_filter":
           idx = _BWD_FILTER_ALGOS.index(algo_name)
       elif algo_type == "bwd_data":
           idx = _BWD_DATA_ALGOS.index(algo_name)
    `_BWD_DATA_ALGOS` and `_BWD_FILTER_ALGOS` are undefined in this module.
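One defensive rewrite would map `algo_type` to its table explicitly and fail loudly on an unknown type. A sketch, with placeholder algorithm names since the two backward tables are what's missing from the module:

```python
# Placeholder tables: the real cudnn.py defines _FWD_ALGOS; the two
# backward tables would need to be (re)defined alongside it.
_FWD_ALGOS = ["implicit_gemm", "implicit_precomp_gemm", "gemm"]
_BWD_FILTER_ALGOS = ["bwd_filter_algo_0", "bwd_filter_algo_1"]
_BWD_DATA_ALGOS = ["bwd_data_algo_0", "bwd_data_algo_1"]

# One lookup table instead of an if/elif chain with a -1 fallback.
_ALGO_TABLES = {
    "fwd": _FWD_ALGOS,
    "bwd_filter": _BWD_FILTER_ALGOS,
    "bwd_data": _BWD_DATA_ALGOS,
}


def algo_to_index(algo_type, algo_name):
    """Return the index of algo_name within the table for algo_type."""
    if algo_type not in _ALGO_TABLES:
        raise ValueError("Unknown algo_type: " + algo_type)
    return _ALGO_TABLES[algo_type].index(algo_name)
```

The explicit dict also removes the silent `idx = -1` path that the original code falls through to when `algo_type` matches no branch.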
   
   (3)
   @script
   def _mirror_pad_func(data_shape, pad_width):
       out = output_tensor((data_shape.shape[0],), "int64")
       for i in const_range(data_shape.shape[0]):
           out[i] = data_shape[i] + int64(pad_width[i][0]) + int64(pad_width[i][1])
       return out
   
    `output_tensor` is not imported in this module. I think TVM could import `output_tensor` from tvm.te.hybrid in the related module, for every program that uses hybrid script.
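For reference, the shape computation the hybrid script performs is just elementwise addition of the pad-before and pad-after widths to each dimension. A plain-Python sketch of the same logic, runnable without TVM:

```python
def mirror_pad_out_shape(data_shape, pad_width):
    """Output shape of a mirror pad: each dimension grows by its
    pad-before plus pad-after width (what _mirror_pad_func computes)."""
    return [dim + int(before) + int(after)
            for dim, (before, after) in zip(data_shape, pad_width)]
```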
   
   (4) In te_compiler.py:
   
   def get_shape(shape):
       """Convert the shape to correct dtype and vars."""
       ret = []
       for dim in shape:
           if isinstance(dim, tvm.tir.IntImm):
               if libinfo()["INDEX_DEFAULT_I64"] == "ON":
                   ret.append(dim)
               else:
                   val = int(dim)
                   assert val <= np.iinfo(np.int32).max
                   ret.append(tvm.tir.IntImm("int32", val))
           elif isinstance(dim, tvm.tir.Any):
               ret.append(te.size_var("any_dim", "int32"))
           else:
               ret.append(dim)
       return ret
   `np` is not defined in this module; `import numpy as np` is missing.
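The fix is simply to add `import numpy as np` at the top of te_compiler.py. For illustration, here is a self-contained sketch of the int32 overflow check that `get_shape` relies on, with a plain Python int standing in for `tvm.tir.IntImm`:

```python
import numpy as np


def to_int32_dim(val):
    """Narrow an integer dimension to int32, as get_shape does when
    INDEX_DEFAULT_I64 is off; fails if the value does not fit."""
    assert val <= np.iinfo(np.int32).max, "dim overflows int32"
    return np.int32(val)
```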
   
   (5) In tvm/contrib/mps:
   def matmul(lhs, rhs, transa=False, transb=False):
       """Create an extern op that computes the matrix multiplication of lhs and rhs with CBLAS.
       This function serves as an example of how to call external libraries.
       Parameters
       ----------
       lhs : Tensor
           The left matrix operand
       rhs : Tensor
           The right matrix operand
       transa : bool
           Whether transpose lhs
       transb : bool
           Whether transpose rhs
       Returns
       -------
       C : Tensor
           The result tensor.
       """
       m = lhs.shape[0] if transa is False else lhs.shape[1]
       n = rhs.shape[1] if transb is False else rhs.shape[0]
       if transa:
           m = b
       if transb:
           n = c
    `b` and `c` are not defined. The `if transa:` / `if transb:` branches also look redundant, since the conditional expressions above them already select the transposed dimensions.
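A standalone sketch of what the shape logic presumably intends, with the dead `m = b` / `n = c` branches removed:

```python
def matmul_out_shape(lhs_shape, rhs_shape, transa=False, transb=False):
    """Result shape (m, n) of lhs @ rhs with optional transposes.

    m comes from lhs (dim 1 if transa, else dim 0); n comes from rhs
    (dim 0 if transb, else dim 1). No undefined names involved.
    """
    m = lhs_shape[1] if transa else lhs_shape[0]
    n = rhs_shape[0] if transb else rhs_shape[1]
    return (m, n)
```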
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
