joddiy opened a new issue #706:
URL: https://github.com/apache/singa/issues/706
Hi, all, during refactoring SONNX, I found the following issues:
# Input
ONNX prefers to pass values as tensor inputs instead of attributes, which may incur
some issues when we create SINGA operators (or layers). There are two cases:
1. SINGA params <- ONNX Initializer
The params of an operator come from an ONNX Initializer (pre-stored
weights). This part is OK now.
2. **SINGA params <- ONNX operator, dynamic graph**
For some ONNX operators (OneHot, Tile, Gather, Reshape, Slice, Clip),
some of their attributes come from other operators' outputs. We
cannot handle this case.
For example, in BERT, the shape of this Reshape operator comes from the
previous operator:
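The difficulty can be shown with a minimal sketch (the graph layout and helper names here are illustrative, not SONNX's actual code): a converter can resolve a tensor input at conversion time only when it is a stored initializer, but a Reshape whose `shape` input is produced by another node has no value until runtime.

```python
# Illustrative sketch of the "SINGA params <- ONNX operator" problem.
# In ONNX, Reshape takes its target shape as a *tensor input*, which may
# be the output of another operator rather than a stored initializer.

initializers = {"W": [[1, 2], [3, 4]]}  # pre-stored weights: resolvable

# Hypothetical node list: the Reshape's shape input "s" is produced by a
# Shape operator at runtime, so it is not stored in the model file.
nodes = [
    {"op": "Shape", "inputs": ["X"], "outputs": ["s"]},
    {"op": "Reshape", "inputs": ["W", "s"], "outputs": ["Y"]},
]

def resolvable_at_convert_time(name):
    """True only if the tensor value is available as an initializer."""
    return name in initializers

for node in nodes:
    if node["op"] == "Reshape":
        data_input, shape_input = node["inputs"]
        print(data_input, resolvable_at_convert_time(data_input))
        print(shape_input, resolvable_at_convert_time(shape_input))
```

The converter can read `W` directly (case 1), but `s` only exists once the graph runs, so the Reshape cannot be turned into a static SINGA layer up front.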

# Layers
for @dcslin
## BatchNorm2d
- remove num_features
- self.allow_params = ["scale", "bias", "running_mean", "running_var"]
## Conv2d
- remove in_channels, out_channels
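Removing `num_features`/`in_channels`/`out_channels` from the constructor implies the layer infers them from the shape of its first input. A hedged sketch of that lazy-initialization idea (this is not SINGA's real Layer API, just the pattern):

```python
import numpy as np

class LazyConv2d:
    """Conv2d-like layer that defers creating its weight until the first
    forward call, inferring in_channels from the input instead of taking
    it as a constructor argument."""

    def __init__(self, out_channels, kernel_size):
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.W = None  # created lazily on first forward

    def initialize(self, x):
        in_channels = x.shape[1]  # NCHW input layout assumed
        self.W = np.zeros((self.out_channels, in_channels,
                           self.kernel_size, self.kernel_size))

    def forward(self, x):
        if self.W is None:
            self.initialize(x)
        # ... actual convolution omitted; this only shows the lazy
        # parameter creation that makes in_channels unnecessary
        return x

layer = LazyConv2d(out_channels=8, kernel_size=3)
layer.forward(np.zeros((1, 4, 5, 5)))
print(layer.W.shape)  # inferred: (8, 4, 3, 3)
```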
## Gemm
In some models, the developer prefers Gemm instead of Linear, so we need to
add Gemm as a Layer.
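For reference, ONNX Gemm computes `Y = alpha * A' @ B' + beta * C`, where `A` and `B` may be transposed via the `transA`/`transB` attributes. A minimal sketch of such a layer (illustrative only, not SINGA's actual Layer class):

```python
import numpy as np

class Gemm:
    """ONNX-style Gemm: Y = alpha * A' @ B' + beta * C."""

    def __init__(self, alpha=1.0, beta=1.0, transA=0, transB=0):
        self.alpha, self.beta = alpha, beta
        self.transA, self.transB = transA, transB

    def forward(self, A, B, C=None):
        a = A.T if self.transA else A
        b = B.T if self.transB else B
        y = self.alpha * (a @ b)
        if C is not None:  # C is optional in ONNX Gemm
            y = y + self.beta * C
        return y

gemm = Gemm(alpha=2.0, beta=1.0)
A = np.array([[1.0, 2.0]])    # shape (1, 2)
B = np.array([[3.0], [4.0]])  # shape (2, 1)
C = np.array([[5.0]])
print(gemm.forward(A, B, C))  # 2 * (1*3 + 2*4) + 5 = 27
```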
# Metaclass
I've checked metaclasses carefully, but it seems I cannot use a
metaclass to modify the forward function in this case. The situation is: I have a
graph defined by ONNX, and I need to write a forward function using SINGA's
operators. I can call SINGA's operators by walking the graph, but I cannot
generate a forward function automatically from the graph.
This is more like what the `exec` function does.
For example, given a graph like this:
```
graph = {
    "op1": {"inputs": ["a1"], "outputs": ["a2"]},
    "op2": {"inputs": ["a2"], "outputs": ["a3"]},
}

# what I can do: interpret the graph at runtime
def forward(x):
    tensors = {"a1": x}
    for op_name, op_info in graph.items():
        op = lookup_singa_op(op_name)  # map the node to a SINGA operator
        inputs = [tensors[inp] for inp in op_info["inputs"]]
        outputs = op(*inputs)
        for outp, val in zip(op_info["outputs"], outputs):
            tensors[outp] = val
    return tensors

# what I cannot do by metaclass but can with exec
program = parse_graph_to_str(graph)
# 'a2=op1(a1)\na3=op2(a2)'
exec(program)
```
So the interpreter-style forward above is my current implementation.
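The exec-based alternative can be made concrete with a runnable sketch (the graph, `parse_graph_to_str`, and the stand-in operators below are illustrative, not SINGA's API):

```python
# Sketch: generate a flat program from the graph and run it with exec().
graph = {
    "op1": {"inputs": ["a1"], "outputs": ["a2"]},
    "op2": {"inputs": ["a2"], "outputs": ["a3"]},
}

def parse_graph_to_str(graph):
    """Flatten the graph into a sequence of assignment statements."""
    lines = []
    for op_name, op_info in graph.items():
        lines.append("%s = %s(%s)" % (", ".join(op_info["outputs"]),
                                      op_name,
                                      ", ".join(op_info["inputs"])))
    return "\n".join(lines)

# Stand-in operators so the generated program can actually run.
def op1(x):
    return x + 1

def op2(x):
    return x * 2

program = parse_graph_to_str(graph)
print(program)  # a2 = op1(a1)\na3 = op2(a2)

namespace = {"op1": op1, "op2": op2, "a1": 10}
exec(program, namespace)
print(namespace["a3"])  # op2(op1(10)) = (10 + 1) * 2 = 22
```

Passing an explicit namespace dict to `exec` keeps the generated assignments out of the module's globals, which makes the result easy to collect afterwards.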