yelite commented on code in PR #15837:
URL: https://github.com/apache/tvm/pull/15837#discussion_r1342717725
##########
python/tvm/contrib/cutlass/build.py:
##########
@@ -862,10 +862,13 @@ def handle_matmul(self, f, op_type):
def handle_attention(self, f, op_type):
"""Annotate an attention op."""
signature = _extract_relax_function_signature(f)
+
if _get_call_node(f.body, "relax.nn.attention") is not None:
op_attrs = _get_call_node(f.body, "relax.nn.attention").attrs
elif _get_call_node(f.body, "relax.nn.attention_bias") is not None:
op_attrs = _get_call_node(f.body, "relax.nn.attention_bias").attrs
+ elif _get_call_node(f.body, "relax.nn.attention_var_len") is not None:
Review Comment:
I am wondering if there is flexibility in the Relax op system to unify all of
these attention variants. The current approach of giving each variant its own
op name works, but it doesn't seem scalable if we want to introduce more
variants (e.g. var_len + bias, or a customized mask); a sketch follows below.
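
For illustration only, here is one way the dispatch side could be made
table-driven, so that a new variant only adds an entry to a list rather than
another `elif` branch. The `_ATTENTION_OP_NAMES` table and
`_find_attention_call` helper are hypothetical, not part of this PR or of
TVM; only `_get_call_node` comes from the file being reviewed. This addresses
just the lookup code here, not the deeper question of unifying the Relax op
definitions themselves.

```python
# Hypothetical sketch: table-driven lookup over known attention variants.
# _get_call_node is the existing helper used in handle_attention above.
_ATTENTION_OP_NAMES = [
    "relax.nn.attention",
    "relax.nn.attention_bias",
    "relax.nn.attention_var_len",
]


def _find_attention_call(body):
    """Return (op_name, call_node) for the first attention variant found,
    or (None, None) if the body contains no attention call."""
    for name in _ATTENTION_OP_NAMES:
        call = _get_call_node(body, name)
        if call is not None:
            return name, call
    return None, None
```

With something like this, `handle_attention` could fetch `op_attrs` from the
returned call node once, instead of re-matching each op name in a chain.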