ANSHUMAN87 commented on a change in pull request #6580:
URL: https://github.com/apache/incubator-tvm/pull/6580#discussion_r501862525
##########
File path: python/tvm/tir/ir_builder.py
##########
@@ -74,8 +75,23 @@ def asobject(self):
def dtype(self):
return self._content_type
+ def _linear_index(self, index):
+ if not isinstance(index, tuple):
Review comment:
We could also check whether _shape is None and return the index directly here.
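The suggestion could look like this minimal sketch (the class is a toy stand-in, not TVM code; the row-major flattening logic is my assumption about how BufferVar linearizes tuple indices):

```python
class BufferVarSketch:
    """Toy stand-in for BufferVar, only to illustrate the early return."""

    def __init__(self, shape=None):
        self._shape = shape

    def _linear_index(self, index):
        # Suggested early exit: with no recorded shape, the caller must
        # already be passing a linear index, so return it unchanged.
        if self._shape is None or not isinstance(index, tuple):
            return index
        # Otherwise fold the tuple into a row-major flat offset,
        # e.g. (i, j) over shape (R, C) -> i * C + j.
        flat = index[0]
        for dim, idx in zip(self._shape[1:], index[1:]):
            flat = flat * dim + idx
        return flat
```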
##########
File path: python/tvm/tir/ir_builder.py
##########
@@ -62,9 +62,10 @@ class BufferVar(ObjectGeneric):
IRBuilder.allocate
"""
- def __init__(self, builder, buffer_var, content_type):
+ def __init__(self, builder, buffer_var, shape, content_type):
Review comment:
Request to add a small comment section describing the new way to use it.
There are now two cases, either an already-linear index or an actual
multi-dimensional index, so a comment will help readers understand the
change better.
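A sketch of the kind of docstring the reviewer asks for (the wording and parameter descriptions are hypothetical, not taken from the PR):

```python
class BufferVar:
    """Illustrative docstring sketch for the updated constructor.

    A BufferVar now supports two indexing modes:

    * a single expression, treated as an already-linear offset, and
    * a tuple of expressions, linearized internally using ``shape``.

    Parameters
    ----------
    builder : object
        The IRBuilder that owns this buffer.
    buffer_var : object
        The underlying buffer variable.
    shape : tuple or None
        Shape used to linearize tuple indices; None means callers
        always pass already-linear indices.
    content_type : str
        Element dtype of the buffer.
    """

    def __init__(self, builder, buffer_var, shape, content_type):
        self._builder = builder
        self._buffer_var = buffer_var
        self._shape = shape
        self._content_type = content_type
```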
##########
File path: python/tvm/topi/nn/sparse.py
##########
@@ -207,3 +207,28 @@ def _csr_transpose_ir(data, indices, indptr, out_data, out_indices, out_indptr):
last[0] = temp2[0]
return irb.get()
+
+
+@tvm.target.generic_func
+def sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type):
Review comment:
Any reason to use the _ prefix for the input names?
##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -76,7 +76,36 @@ TVM_REGISTER_GLOBAL("relay.op.nn._make.sparse_dense")
});
RELAY_REGISTER_OP("nn.sparse_dense")
- .describe(R"code(Applies a sparse linear transformation: :math:`Y = XW^T` with X sparse.
+ .describe(R"code(Applies a sparse linear transformation: :math:`Y = XW^T` with W sparse.
+
+- **data**: `(x1, x2, ..., xn, input_dim)`
+- **weight**: `(units, input_dim)`
+- **out**: `(x1, x2, ..., xn, units)`.
+
+)code" TVM_ADD_FILELINE)
+ .set_attrs_type<SparseDenseAttrs>()
+ .set_num_inputs(4)
+ .add_argument("data", "nD Tensor", "Input data.")
+ .add_argument("weight_data", "1D Tensor", "Weight data matrix.")
+ .add_argument("weight_indices", "1D Tensor", "Weight indices matrix.")
+ .add_argument("weight_indptr", "1D Tensor", "Weight indptr matrix.")
+ .set_support_level(1)
+ .add_type_rel("SparseDense", SparseDenseRel);
+
+Expr MakeSparseDensePadded(Expr data, Expr weight_data, Expr weight_indices, Expr weight_indptr) {
+ auto attrs = make_object<SparseDenseAttrs>();
+ static const Op& op = Op::Get("nn.sparse_dense_padded");
+ return Call(op, {data, weight_data, weight_indices, weight_indptr}, Attrs(attrs), {});
+}
+
+TVM_REGISTER_GLOBAL("relay.op.nn._make.sparse_dense_padded")
+ .set_body([](const TVMArgs& args, TVMRetValue* rv) {
+ runtime::detail::unpack_call<Expr, 4>(MakeSparseDensePadded, args, rv);
+ });
+
+RELAY_REGISTER_OP("nn.sparse_dense_padded")
Review comment:
In which case will this op be used independently?
##########
File path: python/tvm/topi/nn/sparse.py
##########
@@ -207,3 +207,28 @@ def _csr_transpose_ir(data, indices, indptr, out_data, out_indices, out_indptr):
last[0] = temp2[0]
return irb.get()
+
+
+@tvm.target.generic_func
+def sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type):
Review comment:
Ok, got it. Thanks for clarifying!
We could also use a pylint directive like the one below:
# pylint: disable=
def func():
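Concretely, the directive might look like this (the exact pylint message name, unused-argument, is my assumption about what the underscore prefix was avoiding; the body is a hypothetical stub):

```python
# pylint: disable=unused-argument
def sparse_dense_alter_layout(attrs, inputs, tinfos, out_type):
    """Stub with the signature under review; the body is hypothetical.

    The local pylint directive silences the unused-argument warning,
    so the parameters no longer need a leading underscore.
    """
    return None
```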
##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -76,7 +76,36 @@ TVM_REGISTER_GLOBAL("relay.op.nn._make.sparse_dense")
+TVM_REGISTER_GLOBAL("relay.op.nn._make.sparse_dense_padded")
+ .set_body([](const TVMArgs& args, TVMRetValue* rv) {
+ runtime::detail::unpack_call<Expr, 4>(MakeSparseDensePadded, args, rv);
+ });
+
+RELAY_REGISTER_OP("nn.sparse_dense_padded")
Review comment:
I am okay with the disclaimer :)
If it is really not necessary, maybe we should not expose this as a global op.
I am not too sure; it would be better to get some more opinions on this point.
##########
File path: src/relay/op/nn/sparse.cc
##########
@@ -76,7 +76,36 @@ TVM_REGISTER_GLOBAL("relay.op.nn._make.sparse_dense")
+RELAY_REGISTER_OP("nn.sparse_dense_padded")
Review comment:
I think "nn.internal.xxx" is a good option.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org