I don't have access to high-end hardware to run models, but I do have several personal computers whose resources could be combined.
Since TVM can compile models for many hardware targets and has support for the popular frameworks, it seems like a good way to solve this would be to split the original model into smaller pieces and run them across different machines connected over the network (see the toy sketch below).
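Roughly what I have in mind, as a toy sketch of my own (made-up shapes and a hand-picked split point, not something taken from TVM's docs), is two Relay pieces where the output of the first feeds the input of the second:

```python
import numpy as np
import tvm
from tvm import relay

# Two hand-written Relay "pieces" standing in for the halves of a larger model:
# piece 0 computes x * w0, piece 1 consumes that result and adds a bias.
# In a real setup the split point and shapes would come from the original model.
shape = (1, 16)

x = relay.var("data", shape=shape, dtype="float32")
w0 = relay.const(np.random.rand(*shape).astype("float32"))
mod0 = tvm.IRModule.from_expr(relay.Function([x], x * w0))

y = relay.var("data", shape=shape, dtype="float32")
b = relay.const(np.random.rand(*shape).astype("float32"))
mod1 = tvm.IRModule.from_expr(relay.Function([y], y + b))
```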
While walking through the TVM source code and documentation, I found a discussion titled [[RFC] Compute graph pipeline with new subgraph executor](https://discuss.tvm.apache.org/t/rfc-compute-graph-pipeline-with-new-subgraph-executor/9839), which I think is pretty close to what I want, but the corresponding RFC looks outdated.
If I'm not mistaken, the contributed code was moved to https://discuss.tvm.apache.org/t/rfc-compute-graph-pipeline-with-new-subgraph-executor/9839, but I can't make it work either.
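For reference, this is roughly how I understood the pipeline executor is meant to be wired up, going by the unit tests under tests/python/relay/test_pipeline_executor.py. It assumes TVM was built with `set(USE_PIPELINE_EXECUTOR ON)` in config.cmake, and the exact API may have drifted since the RFC, so please correct me if this is wrong:

```python
import numpy as np
import tvm
from tvm.contrib import pipeline_executor, pipeline_executor_build

# mod0 and mod1 are the two Relay pieces from the sketch above.
pipe_config = pipeline_executor_build.PipelineConfig()

# Give each piece its own target/device; here both are the local CPU,
# but each stage could be built for different hardware.
pipe_config[mod0].target = "llvm"
pipe_config[mod0].dev = tvm.cpu(0)
pipe_config[mod1].target = "llvm"
pipe_config[mod1].dev = tvm.cpu(0)

# Wire the pipeline: global input -> mod0, mod0's output -> mod1's input,
# mod1's output -> global output.
pipe_config["input"]["data"].connect(pipe_config[mod0]["input"]["data"])
pipe_config[mod0]["output"][0].connect(pipe_config[mod1]["input"]["data"])
pipe_config[mod1]["output"][0].connect(pipe_config["output"][0])

# Build every stage and run the assembled pipeline.
with tvm.transform.PassContext(opt_level=3):
    factory = pipeline_executor_build.build(pipe_config)

pipeline = pipeline_executor.PipelineModule(factory)
pipeline.set_input("data", tvm.nd.array(np.random.rand(1, 16).astype("float32")))
pipeline.run()
outputs = pipeline.get_output()
```

Even if this builds, it still only pipelines stages on devices attached to one machine, so I'm unsure whether it can stretch to several computers over the network, and whether anything similar exists for split training.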




