Further, I also printed the content of `in_node_entry`:
```
keys: ['node', 'inputs', 'types', 'op', 'name']
node:
free_var %input.1: Tensor[(1, 3, 256, 320), float32]
free_var %encoder.level1.conv.weight: Tensor[(12, 3, 3, 3), float32]
%0 = nn.conv2d(%input.1, %encoder.level1.conv.weight, strides=[2, 2], 
padding=[1, 1, 1, 1], kernel_size=[3, 3]) /* ty=Tensor[(1, 12, 128, 160), 
float32] */;
free_var %encoder.level1.bn.weight: Tensor[(12), float32]
free_var %encoder.level1.bn.bias: Tensor[(12), float32]
free_var %encoder.level1.bn.running_mean: Tensor[(12), float32]
free_var %encoder.level1.bn.running_var: Tensor[(12), float32]
%1 = nn.batch_norm(%0, %encoder.level1.bn.weight, %encoder.level1.bn.bias, 
%encoder.level1.bn.running_mean, %encoder.level1.bn.running_var, 
epsilon=0.001f) /* ty=(Tensor[(1, 12, 128, 160), float32], Tensor[(12), 
float32], Tensor[(12), float32]) */;
%2 = %1.0;
free_var %v953: Tensor[(12, 1, 1), float32]
%3 = reshape(%v953, meta[relay.Constant][0] /* ty=Tensor[(1), int32] */ /* 
ty=Tensor[(1), int32] */, newshape=[-1]) /* ty=Tensor[(12), float32] */;
%4 = nn.prelu(%2, %3) /* ty=Tensor[(1, 12, 128, 160), float32] */;
free_var %encoder.level2_0.conv.0.weight: Tensor[(12, 1, 3, 3), float32]
%5 = nn.conv2d(%4, %encoder.level2_0.conv.0.weight, strides=[2, 2], padding=[1, 
1, 1, 1], groups=12, kernel_size=[3, 3]) /* ty=Tensor[(1, 12, 64, 80), float32] 
*/;
%6 = nn.pad(%5, pad_width=[[0, 0], [0, 0], [0, 0], [0, 0]]) /* ty=Tensor[(1, 
12, 64, 80), float32] */;
%7 = nn.avg_pool2d(%6, pool_size=[64, 80], strides=[64, 80], padding=[0, 0, 0, 
0]) /* ty=Tensor[(1, 12, 1, 1), float32] */;
%8 = nn.batch_flatten(%7) /* ty=Tensor[(1, 12), float32] */;
%9 = nn.batch_flatten(%8) /* ty=Tensor[(1, 12), float32] */;
%10 = multiply(1f /* ty=float32 */, %9) /* ty=Tensor[(1, 12), float32] */;
free_var %encoder.level2_0.conv.1.dense.0.weight: Tensor[(12, 12), float32]
%11 = nn.dense(%10, %encoder.level2_0.conv.1.dense.0.weight, units=12) /* 
ty=Tensor[(1, 12), float32] */;
free_var %encoder.level2_0.conv.1.dense.0.bias: Tensor[(12), float32]
%12 = multiply(1f /* ty=float32 */, %encoder.level2_0.conv.1.dense.0.bias) /* 
ty=Tensor[(12), float32] */;
%13 = nn.bias_add(%11, %12) /* ty=Tensor[(1, 12), float32] */;
free_var %encoder.level2_0.conv.1.dense.1.weight: Tensor[(12), float32]
%14 = nn.prelu(%13, %encoder.level2_0.conv.1.dense.1.weight) /* ty=Tensor[(1, 
12), float32] */;
%15 = reshape(%14, meta[relay.Constant][1] /* ty=Tensor[(4), int32] */ /* 
ty=Tensor[(4), int32] */, newshape=[1, 12, 1, 1]) /* ty=Tensor[(1, 12, 1, 1), 
float32] */;
multiply(%15, %5) /* ty=Tensor[(1, 12, 64, 80), float32] */
// meta data omitted. you can use show_meta_data=True to include meta data
inputs:
[[356, 0, 0], [344, 0, 0]]
types:
[TensorType([1, 12, 64, 80], float32)]
op:
multiply
name:
None

```
Comparing with a correct node, whose keys are ['node', 'inputs', 'types', 'op', 'name', 'topi_op', 'workloads', 'record_candidates'], the "bug node" only has the keys ['node', 'inputs', 'types', 'op', 'name'].
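To narrow down which entries are affected before the tuner hits the `KeyError`, a small helper like the following can report the missing keys. This is a hypothetical diagnostic, not part of TVM: `node_entries` stands for whatever list of entry dicts you are inspecting, and `EXPECTED_KEYS` is taken from the correct node above.

```python
# Keys seen on a correct node entry (from the comparison above).
EXPECTED_KEYS = {"node", "inputs", "types", "op", "name",
                 "topi_op", "workloads", "record_candidates"}

def find_incomplete_entries(node_entries):
    """Return (index, missing_keys) for every entry lacking expected keys."""
    incomplete = []
    for i, entry in enumerate(node_entries):
        missing = EXPECTED_KEYS - set(entry.keys())
        if missing:
            incomplete.append((i, sorted(missing)))
    return incomplete

# Stub entry mirroring the "bug node" printed above:
bug_node = {"node": None, "inputs": [[356, 0, 0], [344, 0, 0]],
            "types": ["TensorType([1, 12, 64, 80], float32)"],
            "op": "multiply", "name": None}
print(find_incomplete_entries([bug_node]))
# → [(0, ['record_candidates', 'topi_op', 'workloads'])]
```

Running this over all entries would show whether only `multiply` nodes are incomplete or whether other ops are affected too.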

What is the reason for this error, and how can I fix it?
I will try to compile the model from PyTorch directly now.





---
[Visit Topic](https://discuss.tvm.ai/t/autotvm-tuning-fails-for-an-onnx-network-on-x86-cpu-in-tune-graph-keyerror-topi-op/6813/3) to respond.