nverke opened a new pull request, #12667: URL: https://github.com/apache/tvm/pull/12667
Results from running a vrmpy operator that first loads its data into VTCM. Schedule legend:

- Vec = Unroll(n // 8) and Vectorize(n // 64)
- Para = Parallel(n // 4)
- Pre = Preallocated VTCM buffers

Total Vrmpy Operations | Total Transfer (MB) | Without VTCM (Gops) | Basic VTCM Loads (Gops) | Vec Loads (Gops) | Vec + Para Loads (Gops) | Pre + Vec Loads (Gops) | Pre + Vec + Para Loads (Gops) | Single DMA Load (Gops) | Preloaded (Gops)
-- | -- | -- | -- | -- | -- | -- | -- | -- | --
1024 | 0.39 | 95.0256 | 0.345 | 0.5408 | 0.5905 | 44.4814 | 32.8886 | 15.0813 | 124.7117
2048 | 0.79 | 124.2389 | 0.4002 | 0.7063 | 0.8826 | 43.5238 | 47.2871 | 16.1339 | 209.2688
4096 | 1.57 | 41.5497 | 0.4215 | 0.8664 | 1.1977 | 10.9374 | 26.5749 | 18.1754 | 241.1628
10240 | 3.93 | 33.2139 | 0.4419 | 1.0506 | 1.7311 | 11.7886 | 34.0405 | 25.4214 | 370.2948
16384 | 6.29 | 20.7683 | 0.4195 | 1.0568 | 1.898 | 7.7292 | 22.5898 | 29.7011 | 397.4137
20480 | 7.86 | 20.2128 | 0.4406 | 1.069 | 1.9779 | 6.6829 | 17.7941 | 25.4929 | 338.294

Results from copying data into VTCM with various strategies (factors: Unroll(n // 2), Vectorize(n // 128), Parallel(n // 4)):

Total Transfer (MB) | Base (GBps) | Unroll + Vectorize (GBps) | Unroll + Vectorize + Parallel (GBps) | Single DMA (GBps)
-- | -- | -- | -- | --
0.01 | 2.2122 | 15.9211 | 4.8287 | 2.2524
0.02 | 2.3207 | 26.1998 | 9.5082 | 4.6669
0.04 | 2.4425 | 38.1089 | 17.5147 | 6.4492
0.08 | 2.5067 | 48.5949 | 32.507 | 9.1469
0.16 | 2.5507 | 57.6021 | 55.1855 | 11.1598
0.31 | 2.7053 | 62.8063 | 83.4726 | 15.2878
0.62 | 2.9199 | 74.3696 | 114.7925 | 17.6438
1 | 2.2645 | 49.8653 | 63.8026 | 18.8814
2 | 1.1232 | 10.3933 | 29.1977 | 20.6719
4 | 1.0683 | 9.6105 | 26.5143 | 25.201
8 | 0.6814 | 6.1916 | 24.049 | 26.1883
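For readers reproducing these numbers: the Gops and GBps columns are plain throughput conversions from an operation count (or transfer size) and a wall-clock time. A minimal sketch of that arithmetic — the helper names and the example timing below are hypothetical, not taken from the PR:

```python
def gops(total_ops: int, elapsed_s: float) -> float:
    """Convert an operation count and elapsed wall-clock time to Gops (1e9 ops/s)."""
    return total_ops / (elapsed_s * 1e9)


def gbps(total_bytes: float, elapsed_s: float) -> float:
    """Convert a transfer size and elapsed wall-clock time to GBps (1e9 bytes/s)."""
    return total_bytes / (elapsed_s * 1e9)


# Hypothetical timing: moving 8 MB in ~305 microseconds lands at roughly the
# ~26 GBps the largest Single DMA transfers in the second table reach.
print(round(gbps(8 * 1e6, 305e-6), 1))
```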
