csullivan opened a new pull request #7687:
URL: https://github.com/apache/tvm/pull/7687
This PR introduces the `opencl --device=adreno` target and corresponding
relay strategies. The conv2d schedules introduced here utilize spatial and
channel-wise packing for the weights (OIHW4o) and activations (NCHW4c),
respectively, both with vector length 4 to support lowering to RGBA texture
memory.
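As a hedged illustration (not code from this PR), the NCHW4c activation packing can be sketched in pure Python: the channel axis is split into groups of 4 so that each packed inner vector maps onto one RGBA texel. The function name and shapes below are hypothetical.

```python
def nchw_to_nchw4c(data, N, C, H, W):
    """Repack a flat NCHW list into NCHW4c nested lists.

    The channel axis C is split into C//4 outer blocks with an inner
    vector of length 4, matching one RGBA texel per packed element.
    Assumes C is divisible by 4 for simplicity.
    """
    assert C % 4 == 0, "channel count must be a multiple of 4"
    packed = []
    for n in range(N):
        for co in range(C // 4):
            for h in range(H):
                for w in range(W):
                    # Gather the 4c inner vector from the flat NCHW buffer
                    texel = [data[((n * C + co * 4 + ci) * H + h) * W + w]
                             for ci in range(4)]
                    packed.append(texel)
    return packed

# Example: a 1x8x2x2 tensor packs into 8 texels of 4 channels each
flat = list(range(1 * 8 * 2 * 2))
packed = nchw_to_nchw4c(flat, 1, 8, 2, 2)
```

The OIHW4o weight layout follows the same idea, with the output-channel axis split by 4 instead of the input-channel axis.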
**AutoTVM support**
- AutoTVM doesn't currently capture the runtime context of extracted tasks.
Without information about the runtime buffer scopes, code generation during
tuning will only occur on flat buffers (not texture). For now, we add a
cache_read("texture") stage when tuning to explore the performance benefit of
texture memory. We believe that with a sufficient number of iterations per
trial, the copy to texture performed before the main compute kernel (which
results from the cache_read) should have constant cost over the search and
therefore not greatly impact the tuning results.
- Note that the cache_read is not needed when using the graph_runtime, which
supports passing in external texture buffers (see: PR#). Therefore, in the
schedules one will observe `if autotvm.GLOBAL_SCOPE.in_tuning:`, which guards
the scheduling related to adding the cache_read-to-texture stage.
- The schedules can be simplified once either 1) AutoTVM task extraction
captures this runtime information (removing the need for a cache_read to
texture), or 2) texture lowering in tir.TextureFlatten fully supports
cache_read cancellation (forwarding external buffers through the cache_read).
Cancellation is currently supported except in the presence of padding, where an
extra texture-to-texture copy results.
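The assumption in the first bullet above, that a constant copy-to-texture cost does not perturb the search, can be sketched with toy numbers (all values below are hypothetical, not measurements from this PR):

```python
# Hypothetical per-trial compute times (seconds) for three schedule
# candidates, plus a fixed copy-to-texture cost contributed by the
# cache_read stage during tuning.
compute_times = {"cand_a": 1.8e-3, "cand_b": 1.2e-3, "cand_c": 2.5e-3}
copy_cost = 0.4e-3  # assumed constant across candidates

# Measured time during tuning includes the copy; a constant additive
# offset leaves the ranking of candidates (and hence the tuner's
# choice) unchanged.
measured = {k: t + copy_cost for k, t in compute_times.items()}

best_compute = min(compute_times, key=compute_times.get)
best_measured = min(measured, key=measured.get)
assert best_compute == best_measured
```

This only holds to the extent that the copy cost really is uniform across candidates, which is why per-trial iteration counts matter.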
An RFC is in progress; once posted I will add a link here.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]