Lebron8997 opened a new issue #13570: [Feature Request] Support ONNX export of Deconvolution operator
URL: https://github.com/apache/incubator-mxnet/issues/13570

Currently, MXNet->ONNX model export does not support the MXNet operator:

- **Deconvolution**

The following code (adapted from https://github.com/NVIDIA/mxnet_to_onnx/blob/master/mx2onnx_converter/mx2onnx_converter_functions.py) may be inserted in [mxnet-ROOT]/python/mxnet/contrib/onnx/mx2onnx/_op_translations.py:

```python
@mx_op.register("Deconvolution")
def convert_deconvolution(node, **kwargs):
    """Map MXNet's Deconvolution operator attributes to ONNX's
    ConvTranspose operator and return the created node.
    """
    onnx = import_onnx_modules()
    name = node["name"]
    inputs = node["inputs"]
    num_inputs = len(inputs)
    proc_nodes = kwargs["proc_nodes"]

    # Resolve the names of the data, weight and (optional) bias inputs.
    input_node = proc_nodes[kwargs["index_lookup"][inputs[0][0]]].name
    weights_node = proc_nodes[kwargs["index_lookup"][inputs[1][0]]].name
    if num_inputs > 2:
        bias_node = proc_nodes[kwargs["index_lookup"][inputs[2][0]]].name

    attrs = node.get("attrs")
    kernel_dims = list(parse_helper(attrs, "kernel"))
    stride_dims = list(parse_helper(attrs, "stride", [1, 1]))
    pad_dims = list(parse_helper(attrs, "pad", [0, 0]))
    num_group = int(attrs.get("num_group", 1))
    # dilations = list(parse_helper(attrs, "dilate", [1, 1]))

    # MXNet stores one symmetric pad value per spatial axis; ONNX expects
    # [x1_begin, x2_begin, ..., x1_end, x2_end], so duplicate the list.
    pad_dims = pad_dims + pad_dims

    input_nodes = [input_node, weights_node]
    if num_inputs > 2:
        input_nodes.append(bias_node)

    deconv_node = onnx.helper.make_node(
        "ConvTranspose",
        inputs=input_nodes,
        outputs=[name],
        kernel_shape=kernel_dims,
        strides=stride_dims,
        pads=pad_dims,
        group=num_group,
        name=name
    )

    return [deconv_node]
```
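For reference, the attribute handling in the converter can be exercised on its own. The sketch below (the names `parse_mxnet_tuple` and `mxnet_pads_to_onnx` are illustrative stand-ins, not part of MXNet) shows how an MXNet attribute string such as `"(1, 1)"` parses into a list of ints, and how the symmetric per-axis pads are doubled into the begin/end layout that ONNX `ConvTranspose` expects:

```python
from ast import literal_eval

def parse_mxnet_tuple(attrs, key, default):
    """Parse an MXNet attribute string such as "(1, 1)" into a list of ints.
    Illustrative stand-in for the converter's parse_helper."""
    if attrs is None or key not in attrs:
        return list(default)
    return [int(v) for v in literal_eval(attrs[key])]

def mxnet_pads_to_onnx(pad_dims):
    """MXNet uses one symmetric pad value per spatial axis; ONNX expects
    [x1_begin, x2_begin, ..., x1_end, x2_end]."""
    return list(pad_dims) + list(pad_dims)

# Example: attrs as they appear in an exported MXNet symbol.
attrs = {"kernel": "(4, 4)", "stride": "(2, 2)", "pad": "(1, 1)"}
pads = mxnet_pads_to_onnx(parse_mxnet_tuple(attrs, "pad", [0, 0]))
print(pads)  # [1, 1, 1, 1]
```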