zhaoyang-star commented on PR #11208:
URL: https://github.com/apache/tvm/pull/11208#issuecomment-1280220884

   > Hi @zhaoyang-star, thanks for taking a look, it's great to see this pass being used elsewhere. The pass currently expects the input to be a module of primitive functions, so I would suggest running `AnnotateUsedMemory` after `FuseOps`, similar to:
   > 
   > ```python
   > mod = relay.transform.InferType()(mod)
   > mod = relay.transform.FuseOps()(mod)
   > mod = relay.transform.InferType()(mod)
   > mod = relay.transform.ToANormalForm()(mod)
   > mod = relay.transform.InferType()(mod)
   > mod = AnnotateUsedMemory()(mod)
   > ```
   > 
   > I did try running your example locally with the above change and this produced the relevant `used_memory` annotations. However, it looks like there is an issue when building the module after running the `AnnotateUsedMemory` pass. Without digging too deeply into it, I would suspect it's because this pass was only considered for the AOT executor, not the graph executor. I believe changes similar to #11091 would be needed in the graph executor to support A-normal form. Hope this helps :)
   
   I want to confirm: did you reproduce the issue (no `used_memory` attr in the output log) using my script above? If it ran correctly for you, could you please share your script? After running my script, I find only one `io_used_memory` attr and no `used_memory` attrs.
   
   When I place `FuseOps` before `AnnotateUsedMemory` exactly as you showed, I get the error `Check failed: (tensor_type) is false:`. As you mentioned, supporting A-normal form in the graph executor may be what is needed to resolve this.
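   For reference, here is a minimal sketch of the suggested pass ordering end to end, with a walk over the result to report any annotations. The toy two-op network is my own stand-in (the original script is not shown here), and I am assuming `AnnotateUsedMemory` is exposed under `relay.transform` in your TVM build; adjust both to match your setup.
   
   ```python
   import tvm
   from tvm import relay
   
   # Toy network standing in for the original script (hypothetical shapes).
   data = relay.var("data", shape=(1, 3, 8, 8), dtype="float32")
   weight = relay.var("weight", shape=(8, 3, 3, 3), dtype="float32")
   body = relay.nn.relu(relay.nn.conv2d(data, weight, padding=(1, 1)))
   mod = tvm.IRModule.from_expr(relay.Function([data, weight], body))
   
   # The suggested ordering: fuse ops into primitive functions first,
   # then convert to A-normal form before annotating.
   mod = relay.transform.InferType()(mod)
   mod = relay.transform.FuseOps()(mod)
   mod = relay.transform.InferType()(mod)
   mod = relay.transform.ToANormalForm()(mod)
   mod = relay.transform.InferType()(mod)
   # Assumes the pass is reachable as in the quoted snippet; where it is
   # exposed in Python may differ between TVM versions.
   mod = relay.transform.AnnotateUsedMemory()(mod)
   
   # FuseOps produces *inline* primitive functions inside "main", so the
   # per-function "used_memory" attrs must be found with a post-order
   # visit rather than by iterating mod.functions.
   def report(expr):
       if isinstance(expr, relay.Function) and expr.attrs:
           if "used_memory" in expr.attrs:
               print("used_memory =", expr.attrs["used_memory"])
   
   relay.analysis.post_order_visit(mod["main"].body, report)
   
   # The whole-module "io_used_memory" attr lands on "main" itself.
   main_attrs = mod["main"].attrs
   if main_attrs and "io_used_memory" in main_attrs:
       print("io_used_memory =", main_attrs["io_used_memory"])
   ```
   
   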


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
