comaniac commented on a change in pull request #9207:
URL: https://github.com/apache/tvm/pull/9207#discussion_r723656913



##########
File path: python/tvm/relay/op/strategy/generic.py
##########
@@ -715,8 +715,14 @@ def dilation2d_strategy(attrs, inputs, out_type, target):
     return strategy
 
 
+def maybe_copy_tensor_b(tensor_a, tensor_b):
+    if tensor_a == tensor_b:
+        return te.compute(tensor_a.shape, lambda *ind: tensor_a[ind], tag="tensor_b_copy")
+    return tensor_b
+
+
 # matmul
-def wrap_compute_matmul(topi_compute, need_auto_scheduler_layout=False):
+def wrap_compute_matmul(topi_compute, need_auto_scheduler_layout=False, need_tensor_b_copy=True):

Review comment:
       Yeah, I understand these semantics (btw, IIUC, external libs won't use the TE compute anyway, so does it matter to copy it in the TE compute even for the external libs?).
   
   My comment here is mainly about the naming (in case I misunderstood something and we still need this flag for external libs). What I meant was whether we should use a more general name for this flag.
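For context, the pattern the diff's `maybe_copy_tensor_b` implements, copying the second operand only when it aliases the first, can be sketched outside TVM with plain NumPy. This is a hypothetical illustration of the aliasing check, not the PR's actual TE code (the real helper builds a `te.compute` stage tagged `tensor_b_copy` instead of calling `.copy()`):

```python
import numpy as np

def maybe_copy_tensor_b(tensor_a, tensor_b):
    # When both operands are the same object (e.g. matmul(x, x)),
    # return an explicit copy so a layout rewrite applied to one
    # operand cannot silently change the other.
    if tensor_a is tensor_b:
        return tensor_b.copy()
    # Distinct operands need no copy.
    return tensor_b

x = np.arange(4.0)
y = maybe_copy_tensor_b(x, x)
assert y is not x          # aliasing is broken by the copy
assert (y == x).all()      # values are unchanged

w = np.ones(4)
assert maybe_copy_tensor_b(x, w) is w  # distinct input passes through
```

In the TE version, the copy is materialized as a separate compute stage so schedulers can treat the two matmul inputs as independent tensors.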




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]