comaniac commented on a change in pull request #7496:
URL: https://github.com/apache/tvm/pull/7496#discussion_r595357040



##########
File path: python/tvm/topi/cuda/dense.py
##########
@@ -39,14 +39,11 @@ def dense_cublas(cfg, data, weight, bias=None, out_dtype=None):
     if out_dtype is None:
         out_dtype = data.dtype
     assert out_dtype == data.dtype, "Mixed precision not supported."
-    batch, in_dim = data.shape
-    out_dim, _ = weight.shape
+    batch, in_dim = get_const_tuple(data.shape)
+    out_dim, _ = get_const_tuple(weight.shape)
     matmul = cublas.matmul(data, weight, False, True)
-    if isinstance(batch, int):
+    if all(isinstance(d, int) for d in [batch, in_dim, out_dim]):

Review comment:
       ditto
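(The "ditto" points at the `IntImm` concern raised in the last comment below: raw shape entries are boxed `Array<Integer>` elements, which is why the diff switches to `get_const_tuple`. As a rough illustration of what that conversion buys, here is a minimal stand-in sketch — `FakeIntImm` and `get_const_tuple_sketch` are hypothetical names, not TVM APIs, since TVM itself is not imported here:)

```python
class FakeIntImm:
    """Stand-in for tvm.tir.IntImm: a boxed integer carrying a .value field."""

    def __init__(self, value):
        self.value = value


def get_const_tuple_sketch(shape):
    """Mimic topi's get_const_tuple: unbox IntImm-like entries to plain ints."""
    return tuple(s.value if isinstance(s, FakeIntImm) else s for s in shape)


# Raw shape entries are boxed, so `isinstance(batch, int)` would be False...
shape = (FakeIntImm(4), FakeIntImm(16))
# ...but after unboxing, ordinary Python int checks work as intended.
batch, in_dim = get_const_tuple_sketch(shape)
assert isinstance(batch, int) and batch == 4
```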

##########
File path: python/tvm/relay/frontend/tensorflow.py
##########
@@ -919,13 +930,31 @@ def _impl(inputs, attr, params, mod):
         input_y = inputs[1]
         orig_shape_x = _infer_shape(input_x, mod)
         orig_shape_y = _infer_shape(input_y, mod)
+        ndim = len(orig_shape_x)
+
+        is_static = not check_symbolic_shape(orig_shape_x)
+
+        if len(orig_shape_x) > 3 and not is_static:

Review comment:
       ```suggestion
           if ndim > 3 and not is_static:
       ```
       ditto to the rest.
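(For context on the `is_static` flag this hunk introduces: a shape is treated as symbolic when any dimension is not a concrete integer. The helper below is a hypothetical mirror of the frontend's `check_symbolic_shape`, written self-contained with a string as a stand-in for a symbolic dimension — not the actual TVM implementation:)

```python
def check_symbolic_shape(shape):
    """Return True if any dimension is not a concrete Python int (i.e. symbolic)."""
    return not all(isinstance(dim, int) for dim in shape)


# A string stands in for a symbolic dim such as tvm.tir.Any in this sketch.
assert check_symbolic_shape([1, "n", 3])
assert not check_symbolic_shape([1, 2, 3])
```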

##########
File path: python/tvm/topi/cuda/batch_matmul.py
##########
@@ -161,7 +161,8 @@ def batch_matmul_cublas(cfg, x, y, out_shape=None):
     """
     b, m, k = x.shape
     b, n, k = y.shape
-    cfg.add_flop(b * m * k * n * 2)
+    if all([isinstance(s, int) for s in [b, m, n, k]]):

Review comment:
       The type of tensor shape is `Array<Integer>` so `s` could be `IntImm`.
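       (To spell the concern out: `isinstance(s, int)` returns False for a boxed `IntImm`, so the guard would silently skip the FLOP accounting even for fully static shapes. A minimal sketch of the failure and a widened check, using a hypothetical `FakeIntImm` stand-in rather than TVM's real class:)

```python
class FakeIntImm:
    """Stand-in for tvm.tir.IntImm: a boxed integer with a .value field."""

    def __init__(self, value):
        self.value = value


def all_static(dims, int_types=(int, FakeIntImm)):
    """Accept both plain ints and IntImm-like boxed ints as static dims."""
    return all(isinstance(d, int_types) for d in dims)


b, m, n, k = FakeIntImm(8), 32, 32, FakeIntImm(64)
# The original check misses boxed ints...
assert not all(isinstance(s, int) for s in [b, m, n, k])
# ...while the widened check treats them as static.
assert all_static([b, m, n, k])
```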




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]