cee1 commented on pull request #10650:
URL: https://github.com/apache/tvm/pull/10650#issuecomment-1072250510
Summary:
For a qnn.conv2d, fusing "conv + ... + requantize" into one subgraph reduces
the output size by three quarters (int32 vs. int8 dtype), which in our case is
an essential factor for performance tuning.
Thus, running AutoTVM at subgraph granularity is important here.
This patch tries to add support for subgraph-level AutoTVM tuning.
- A `GLOBAL_SCOPE.tune_subgraph` option is introduced.
<br/>
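As an illustration of how the new option might be toggled: `tvm.autotvm` already keeps process-wide flags (e.g. `in_tuning`, `silent`) on a `GLOBAL_SCOPE` object, and the PR's `tune_subgraph` flag presumably follows the same pattern. The class below is a sketch mimicking that pattern, not the TVM source:

```python
# Sketch only: mimics the tvm.autotvm GLOBAL_SCOPE pattern to show how the
# new flag would be used; the real object lives in tvm.autotvm.
class AutotvmGlobalScope:
    def __init__(self):
        self.in_tuning = False      # existing flag in tvm.autotvm
        self.silent = False         # existing flag in tvm.autotvm
        self.tune_subgraph = False  # new flag described by this PR (default assumed off)

GLOBAL_SCOPE = AutotvmGlobalScope()

# Enable subgraph-level task creation before extracting tuning tasks:
GLOBAL_SCOPE.tune_subgraph = True
```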
The major change is creating tuning tasks _at subgraph granularity_ in
`ScheduleBuilder`, via `register_subgraph_task(...)` and
`register_topi_subgraph(...)`.
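A minimal sketch of what such a registration wrapper might look like, where the whole subgraph is keyed under one task name and the wrapper delegates to the anchor op's _fcompute_. The names below (`SUBGRAPH_TASK_TABLE`, the decorator signature) are illustrative, not the patch's actual API:

```python
# Hypothetical sketch of register_topi_subgraph(...): wrap an anchor op's
# fcompute so the whole subgraph becomes a single AutoTVM task.
SUBGRAPH_TASK_TABLE = {}

def register_topi_subgraph(task_name):
    def _decorate(fcompute):
        def _wrapped(*args, **kwargs):
            # Delegate to the anchor op's compute. A real implementation
            # would also replay the non-anchor ops of the subgraph here.
            return fcompute(*args, **kwargs)
        SUBGRAPH_TASK_TABLE[task_name] = _wrapped
        return _wrapped
    return _decorate

@register_topi_subgraph("conv2d_nchw_subgraph.x86")
def conv2d_nchw(data, weight):
    # Stand-in for the real topi compute; returns a tag plus its inputs.
    return ("conv2d", data, weight)
```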
A series of questions then needs to be handled:
- For `export_library(...)`: how do we hit the correct tuning-log entry for the subgraph?
- The same subgraph must be generated at tuning time and at export time, i.e.
[the same relay passes must be
applied](https://github.com/apache/tvm/pull/10650/files#diff-39fc81d81b494a431f3a37d66e6e556c28c71be4b6adcc856c66f2a4cc07bf5dR57)
- [The same subgraph name must be
generated](https://github.com/apache/tvm/pull/10650/files#diff-39fc81d81b494a431f3a37d66e6e556c28c71be4b6adcc856c66f2a4cc07bf5dR215)
- For `register_topi_subgraph(...)`, which registers a wrapper _fcompute_:
- How do we locate the _fcompute_ of the anchor op? Especially when one
_fcompute_ is a wrapper around another _fcompute_ (see
[`conv2d_nchw.x86`](https://github.com/apache/tvm/blob/main/python/tvm/relay/op/strategy/x86.py#L134)
and
[`topi.x86.conv2d_nchw`](https://github.com/apache/tvm/blob/main/python/tvm/topi/x86/conv2d.py#L127)
)
- How do we figure out the args, specifically the __input args__ of a
subgraph, in `TaskTemplate.__call__`?
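On the "same subgraph name" question above: one way to make the name deterministic across tuning time and export time is to hash a canonical form of the subgraph. A pure-Python illustration (not the patch's implementation; the function name and format are assumptions):

```python
import hashlib

def subgraph_task_name(anchor_op, arg_shapes):
    """Illustrative only: derive a deterministic task name from the anchor op
    and the subgraph's input shapes, so tuning and export produce the same
    key into the tuning log."""
    canonical = anchor_op + "|" + ";".join(
        "x".join(str(d) for d in shape) for shape in arg_shapes
    )
    digest = hashlib.sha256(canonical.encode()).hexdigest()[:12]
    return f"{anchor_op}_subgraph_{digest}"
```

Because the digest depends only on the canonical string, the same relay passes applied to the same model always yield the same log key.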
There are some known limitations of the current patch:
1. More than one implementation registered for the anchor op
2. A _fcompute_ that accesses the tuning config
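On determining a subgraph's __input args__: in Relay, `relay.analysis.free_vars` returns the variables a subgraph body uses but does not bind, and those become the task's input placeholders. A toy pure-Python analogue over a tuple-encoded expression, purely for illustration:

```python
# Toy analogue of relay.analysis.free_vars over a tuple-encoded expression:
# ("var", name) is a variable, ("call", op, [args...]) is an op call.
def free_vars(expr, bound=frozenset()):
    kind = expr[0]
    if kind == "var":
        name = expr[1]
        return [] if name in bound else [name]
    if kind == "call":
        out = []
        for arg in expr[2]:
            for v in free_vars(arg, bound):
                if v not in out:   # keep first-occurrence order, no dups
                    out.append(v)
        return out
    return []

# conv2d(data, weight) + bias  ->  inputs are data, weight, bias
subgraph = ("call", "add",
            [("call", "conv2d", [("var", "data"), ("var", "weight")]),
             ("var", "bias")])
```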
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]