dvhg commented on issue #8233:
URL: https://github.com/apache/tvm/issues/8233#issuecomment-858918248


   That's a good point; I hadn't thought to check memory on CPU targets. With 
the llvm target, I also see memory usage grow with each inference: after about 
300 inferences, the Python process consumes ~25% of my 128 GB of physical RAM. 
The rate of growth seems to slow over time, but it varies a lot depending on 
the input.
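   
   For reference, here is a minimal sketch of the kind of loop used to watch 
the growth. A toy Relay network stands in for the real model, and the input 
name `data`, the shape, and the psutil-based RSS tracking are illustrative 
assumptions rather than the actual workload:
   
   ```python
   import numpy as np
   import psutil
   import tvm
   from tvm import relay
   from tvm.contrib import graph_executor
   
   # Toy network standing in for the actual model -- the point is only
   # the measurement loop, not the workload.
   x = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
   fn = relay.Function([x], relay.nn.softmax(relay.nn.relu(x)))
   mod = tvm.IRModule.from_expr(fn)
   
   with tvm.transform.PassContext(opt_level=3):
       lib = relay.build(mod, target="llvm")
   
   dev = tvm.cpu(0)
   m = graph_executor.GraphModule(lib["default"](dev))
   proc = psutil.Process()
   
   for i in range(300):
       data = np.random.uniform(size=(1, 3, 224, 224)).astype("float32")
       m.set_input("data", data)
       m.run()
       if i % 50 == 0:
           # Track resident set size across iterations; the question is
           # whether it plateaus or keeps climbing.
           rss_mib = proc.memory_info().rss / 2**20
           print(f"iter {i}: rss = {rss_mib:.1f} MiB")
   ```
   
   With this toy graph the RSS should plateau quickly; the interesting case 
is substituting the real model and watching it keep climbing.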
   
   I've also seen this happen with FasterRCNN.

