billishyahao opened a new pull request, #11508:
URL: https://github.com/apache/tvm/pull/11508
This patch enables layer normalization in the DNNL BYOC flow by providing an out-of-the-box rewrite pattern that combines the constituent operators into a single Relay layer normalization operator, together with its implementation in the DNNL JSON codegen.
After applying the rewrite pattern, we observe the following DNNL function:
```
def @tvmgen_default_dnnl_main_108(%dnnl_108_i0: Tensor[(1, 784, 128), float32], Inline=1, Compiler="dnnl", global_symbol="tvmgen_default_dnnl_main_108", Primitive=1) -> Tensor[(1, 784, 128), float32] {
  nn.layer_norm(%dnnl_108_i0, meta[relay.Constant][56] /* ty=Tensor[(128), float32] */, meta[relay.Constant][57] /* ty=Tensor[(128), float32] */) /* ty=Tensor[(1, 784, 128), float32] */
}
```
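For intuition, the operator chain that such a rewrite pattern matches (mean, variance, normalize, scale, shift) computes the same result as the fused `nn.layer_norm`. Below is a minimal numerical sketch in plain Python (not Relay; the function name and epsilon default are illustrative assumptions, not part of this patch):

```python
import math

def decomposed_layer_norm(x, gamma, beta, eps=1e-5):
    """Layer norm spelled out as the primitive ops that the rewrite
    pattern collapses into a single nn.layer_norm call.
    x, gamma, beta are equal-length lists of floats; eps is the usual
    numerical-stability constant (value here is an assumption)."""
    # mean over the normalized axis
    mean = sum(x) / len(x)
    # biased variance over the same axis
    var = sum((v - mean) ** 2 for v in x) / len(x)
    # normalize, then apply the learned scale (gamma) and shift (beta)
    return [(v - mean) / math.sqrt(var + eps) * g + b
            for v, g, b in zip(x, gamma, beta)]
```

A BYOC rewrite pattern recognizes exactly this chain in the Relay graph and replaces it with one fused operator, which the DNNL JSON codegen can then map to oneDNN's layer normalization primitive.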
Once the DNNL_VERBOSE flag is enabled, more information is shown in the log file, as below:
```
onednn_verbose,exec,cpu,layer_normalization,simple_layer_normalization:any,forward_inference,data_f32::blocked:abc:f0 stats_undef::undef::f0 diff_undef::undef::f0,,flags:CH,1x784x128,0.0551758
```
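One way to capture such a trace is to set the environment variable when running the workload (the script name below is a hypothetical placeholder; oneDNN also accepts `ONEDNN_VERBOSE` in newer releases):

```shell
# Enable oneDNN verbose tracing and capture the primitive execution log.
# run_model.py is a placeholder for whatever script exercises the DNNL BYOC path.
DNNL_VERBOSE=1 python run_model.py 2> dnnl_verbose.log
```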