apeskov commented on PR #11642:
URL: https://github.com/apache/tvm/pull/11642#issuecomment-1157629476

   > But I think it's better to do the layout transform in AlterOpLayout.
   
   Absolutely agree with you! Automatic reordering is just a fallback mechanism 
that prevents DNNL from falling back to slow "ref:any" implementations. The 
"TensorRequisite" mechanism was designed specifically to do nothing when the 
layouts already match, and to smooth out the performance degradation when the 
layout was not specified properly.
   
   The current mechanism based on "get_optimal_layout_for_XXX" has a lot of 
potential limitations and may work incorrectly in some cases, so automatic 
reordering is still needed.
   
   On the topic of enabling the `sum post op`: absolutely agree with you again! 
This is an important feature, especially for latency optimisation on systems 
with a high core count. As you can see, I'm working on that. This patch contains 
only "stub" in-place support, because a lot of changes are required at the level 
of the TVM GraphExecutor/VM memory plan manager. In-place memory modification 
support should be added to TVM as a common feature; otherwise it will look like 
a hack/workaround only for the DNNL runtime.

