This is the roadmap for TVM v0.7. TVM is a community-driven project and we love your feedback and proposals on where we should be heading. Please open discussions in the discussion forum as well as bring up RFCs.
- Feel free to volunteer if you are interested in trying out some of the items (they do not have to be on the list).
- Please also check out the [help wanted](https://github.com/apache/incubator-tvm/issues?q=is%3Aissue+is%3Aopen+label%3A%22status%3A+help+wanted%22) list in the GitHub issues for things that need help.

In the 0.7 cycle, we are going to focus on the following four areas, summarized from the [forum discussion](https://discuss.tvm.ai/t/rfc-discuss-tvm-v0-7-roadmap/5159). We also welcome contributions in all other areas, including more operator and model coverage; they will be added to this list gradually. Please reply to this thread with what you would like to volunteer to work on.

### Core Infra

- [ ] Unified IR refactor
- [ ] Unified runtime for heterogeneous execution
- [ ] Enhanced support for high-level graph rewriting for accelerators
- [ ] Improving test and benchmark infrastructure
- [ ] Testing and benchmarking on remote targets

### Usability

- [ ] Better documentation for developers
- [ ] Command line utilities to use TVM as an ahead-of-time compiler
- [ ] Visualization of Relay graphs (see the sketch after this list)

### Backend and runtime

- [ ] End-to-end uTVM
- [ ] More dynamic model support
- [ ] Complete VM functionality
- [ ] Improve VM performance
- [ ] Add tutorial for VM (see the sketch after this list)
- [ ] External code generator
- [ ] End-to-end inference with Chisel VTA
- [ ] CUDA half2 data type support
- [ ] bfloat16 support
- [ ] 4-bit model support

### Automation

- [ ] More auto scheduling
- [ ] Better loop partitioning
- [ ] Reduce AutoTVM tuning time
- [ ] Auto tensorization
- [ ] Auto quantization
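To make a couple of the items above more concrete (Relay graph visualization and the VM tutorial), here is a minimal sketch rather than a definitive implementation: the toy `y = x + x` function is invented purely for illustration, and the `tvm.IRModule` / `relay.create_executor` calls are the APIs available around the 0.6/0.7 timeframe, so exact argument names may differ between versions.

```python
import numpy as np
import tvm
from tvm import relay

# A toy Relay graph, y = x + x, purely for illustration.
x = relay.var("x", shape=(1, 3), dtype="float32")
func = relay.Function([x], relay.add(x, x))
mod = tvm.IRModule.from_expr(func)

# Today the closest thing to "visualization" is the Relay text format;
# the roadmap item asks for a richer (e.g. graphical) view on top of it.
print(mod)

# Run the graph through the Relay VM, the executor that the
# "Complete VM functionality" / "Add tutorial for VM" items refer to.
ex = relay.create_executor("vm", mod=mod, ctx=tvm.cpu(0), target="llvm")
out = ex.evaluate()(np.ones((1, 3), dtype="float32"))
print(out)
```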