jtuyls commented on issue #10696:
URL: https://github.com/apache/tvm/issues/10696#issuecomment-1081800587


   This seems to happen when tensorflow is loaded inside pyxir. I don't know 
the exact cause of this starting with tensorflow 2.6, but the issue looks very 
similar to this one: 
https://github.com/triton-inference-server/server/issues/3777, except with 
tensorflow instead of pytorch.
   
   I have a workaround that loads tensorflow lazily, only when it is actually 
needed. I am currently verifying it with the ci-cpu docker image locally and 
will create a PR when successful.
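
   The lazy-loading workaround can be sketched roughly as below. This is a 
generic deferred-import pattern, not the actual pyxir change; the helper names 
are made up for illustration, and `json` stands in for `tensorflow` so the 
sketch runs without it installed:

   ```python
   import importlib

   _heavy = None

   def _get_heavy_dep():
       """Import the heavy dependency on first use and cache the module.

       In the real workaround the module would be "tensorflow"; "json" is a
       stand-in so this sketch is self-contained.
       """
       global _heavy
       if _heavy is None:
           _heavy = importlib.import_module("json")  # stand-in for "tensorflow"
       return _heavy

   def code_path_needing_dep(payload):
       # The import only happens if this code path actually executes,
       # so merely importing this module stays cheap and side-effect free.
       mod = _get_heavy_dep()
       return mod.dumps(payload)
   ```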


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

