tkonolige commented on a change in pull request #6889:
URL: https://github.com/apache/tvm/pull/6889#discussion_r530599568



##########
File path: python/tvm/topi/cuda/sparse.py
##########
@@ -362,6 +363,7 @@ def _alter_sparse_dense_layout(_attrs, inputs, _tinfos, _out_type):
     sparse_dense implementation for one that operates on a padded matrix. We
     also padd the matrix.
     """
+    # TODO(ANSHUMAN87): Handle for sparse_data case too

Review comment:
       This should probably say "sparse_lhs".

##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -1993,17 +1993,27 @@ def batch_matmul(x, y):
     return _make.batch_matmul(x, y)
 
 
-def sparse_dense(data, weight):
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_dense(dense_mat, sparse_mat, sparse_lhs=False):
     r"""
-    Computes the matrix multiplication of `data` and `weight`, where `data` is
-    a dense matrix and `weight` is a sparse (either BSR or CSR) namedtuple with
+    Computes the matrix multiplication of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
     fields `data`, `indices`, and `indptr`.
 
     .. math::
 
-        \mbox{sparse_dense}(data, weight)[m, n] = \mbox{matmul}(x, \mbox{as_dense}(weight)^T)[m, n]
+       if sparse_lhs=True

Review comment:
       Is this going to render correctly inside of a math block?
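
For context on the docstring under review: a minimal sketch (using SciPy and NumPy, not TVM itself) of the CSR fields `data`, `indices`, and `indptr` named in the docstring, and of the default `sparse_lhs=False` semantics, i.e. `matmul(dense, as_dense(sparse)^T)`. The matrices here are made up for illustration.

```python
# Illustrative sketch only (SciPy/NumPy, not the TVM relay API).
import numpy as np
import scipy.sparse as sp

dense_mat = np.arange(6, dtype="float32").reshape(2, 3)  # shape (M, K)
sparse_mat = sp.csr_matrix(
    np.array([[1, 0, 0], [0, 0, 2]], dtype="float32")    # shape (N, K)
)

# The three CSR arrays corresponding to the namedtuple fields in the docstring:
print(sparse_mat.data)     # non-zero values:            [1. 2.]
print(sparse_mat.indices)  # column index per non-zero:  [0 2]
print(sparse_mat.indptr)   # row start offsets:          [0 1 2]

# Default case (sparse_lhs=False): out[m, n] = matmul(dense, as_dense(sparse)^T)[m, n]
out = dense_mat @ sparse_mat.toarray().T
```

With `sparse_lhs=True` the operands are swapped, which is why the review asks whether the case split will render correctly inside the `.. math::` block.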




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
