josephevans commented on a change in pull request #20269:
URL: https://github.com/apache/incubator-mxnet/pull/20269#discussion_r632160369



##########
File path: docs/python_docs/python/tutorials/deploy/export/onnx.md
##########
@@ -108,28 +106,27 @@ export_model(sym, params, in_shapes=None, in_types=<class 
'numpy.float32'>, onnx
     This method is available when you ``import mxnet.onnx``
 ```
 
-`export_model` API can accept the MXNet model in one of the following ways.
+The `export_model` API can accept an MXNet model in one of the following ways.
 
 1. MXNet's exported json and params files:
     * This is useful if we have pre-trained models and we want to convert them 
to ONNX format.
 2. MXNet sym, params objects:
-    * This is useful if we are training a model. At the end of training, we 
just need to invoke the `export_model` function and provide sym and params 
objects as inputs with other attributes to save the model in ONNX format. The 
params can be either a single object that contains both argument and auxiliary 
parameters, or a list that includes arg_parmas and aux_params objects
+    * This is useful if we are training a model. At the end of training, we just need to invoke the `export_model` function and provide the sym and params objects as inputs to save the model in ONNX format. The params can be either a single object that contains both argument and auxiliary parameters, or a list that includes arg_params and aux_params objects.
 
+Since we have downloaded pre-trained model files, we will use the 
`export_model` API by passing in the paths of the symbol and params files.
 
-Since we have downloaded pre-trained model files, we will use the 
`export_model` API by passing the path for symbol and params files.
+## Use mx2onnx to eport the model

Review comment:
       spelling of export
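
For reference, a minimal sketch of the file-path flavor described in this hunk, following the `export_model` signature quoted above (the model file names here are hypothetical placeholders):

```python
import numpy as np
import mxnet as mx

# Hypothetical file names for a downloaded pre-trained model
sym_file = './model-symbol.json'
params_file = './model-0000.params'

in_shapes = [(1, 3, 224, 224)]
in_types = [np.float32]

# export_model also accepts file paths in place of in-memory sym/params objects
converted_model_path = mx.onnx.export_model(sym_file, params_file, in_shapes,
                                            in_types, 'model.onnx')
```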

##########
File path: python/mxnet/onnx/README.md
##########
@@ -17,25 +17,25 @@
 # ONNX Export Support for MXNet
 
 ### Overview
-[ONNX](https://onnx.ai/), or Open Neural Network Exchange, is an open source 
deep learning model format that acts as a framework neutral graph 
representation between DL frameworks or between training and inference. With 
the ability to export models to the ONNX format, MXNet users can enjoy faster 
inference and a wider range of deployment device choices, including edge and 
mobile devices where MXNet installation may be constrained. Popular 
hardware-accelerated and/or cross-platform ONNX runtime frameworks include 
Nvidia [TensorRT](https://github.com/onnx/onnx-tensorrt), Microsoft 
[ONNXRuntime](https://github.com/microsoft/onnxruntime), Apple 
[CoreML](https://github.com/onnx/onnx-coreml) and 
[TVM](https://tvm.apache.org/docs/tutorials/frontend/from_onnx.html), etc. 
+[ONNX](https://onnx.ai/), or Open Neural Network Exchange, is an open source 
deep learning model format that acts as a framework neutral graph 
representation between DL frameworks or between training and inference. With 
the ability to export models to the ONNX format, MXNet users can enjoy faster 
inference and a wider range of deployment device choices, including edge and 
mobile devices where MXNet installation may be constrained. Popular 
hardware-accelerated and/or cross-platform ONNX runtime frameworks include 
Nvidia [TensorRT](https://github.com/onnx/onnx-tensorrt), Microsoft 
[ONNXRuntime](https://github.com/microsoft/onnxruntime), Apple 
[CoreML](https://github.com/onnx/onnx-coreml), etc.
 
 ### ONNX Versions Supported
-ONNX 1.7 -- Fully Supported
-ONNX 1.8 -- Work in Progress
+ONNX 1.7 & 1.8
 
 ### Installation
-From the 1.9 release and on, the ONNX export module has become an offical, 
built-in module in MXNet. You can access the module at `mxnet.onnx`. 
+From the MXNet 1.9 release on, the ONNX export module has become an official, built-in feature in MXNet. You can access the module at `mxnet.onnx`.
 
-If you are a user of earlier MXNet versions and do not want to upgrade MXNet, 
you can still enjoy the latest ONNX suppor by pulling the MXNet source code and 
building the wheel for only the mx2onnx module. Just do `cd python/mxnet/onnx` 
and then build the wheel with `python3 -m build`. You should be able to find 
the wheel under `python/mxnet/onnx/dist/mx2onnx-0.0.0-py3-none-any.whl` and 
install it with `pip install mx2onnx-0.0.0-py3-none-any.whl`. You should be 
able to access the module with `import mx2onnx` then.
+If you are a user of earlier MXNet versions and do not want to upgrade MXNet, 
you can still enjoy the latest ONNX suppor by pulling the MXNet source code and 
building the wheel for only the mx2onnx module. Just do `cd python/mxnet/onnx` 
and then build the wheel with `python3 -m build`. You should be able to find 
the wheel under `python/mxnet/onnx/dist/mx2onnx-0.0.0-py3-none-any.whl` and 
install it with `pip install mx2onnx-0.0.0-py3-none-any.whl`. You should can 
then access the module with `import mx2onnx`. The `mx2onnx` namespace is 
equivalent to `mxnet.onnx`.

Review comment:
       spelling: support
    
   "You should can then access" should be "You can then access".

##########
File path: docs/python_docs/python/tutorials/deploy/export/onnx.md
##########
@@ -139,36 +136,38 @@ We have defined the input parameters required for the 
`export_model` API. Now, w
 
 ```python
 # Invoke export model API. It returns path of the converted onnx model
-converted_model_path = mx.onnx.export_model(sym, params, input_shape, 
input_dtypes, onnx_file)
+converted_model_path = mx.onnx.export_model(sym, params, in_shapes, in_types, 
onnx_file)
 ```
 
-This API returns path of the converted model which you can later use to import 
the model into other frameworks. Please refer to 
[mx2onnx](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#apis)
 for more details about the API.
+This API returns the path of the converted model, which you can later use to run inference or to import the model into other frameworks. Please refer to [mx2onnx](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#apis) for more details about the API.
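
As a hedged illustration of running inference with the converted model, here is a minimal sketch using ONNXRuntime, one of the runtimes named in the README overview (the input shape and dtype are assumptions, and `onnxruntime` must be installed separately):

```python
import numpy as np
import onnxruntime as ort

# Create a session from the exported ONNX file
sess = ort.InferenceSession(converted_model_path)

# Query the input name from the model rather than hard-coding it
input_name = sess.get_inputs()[0].name

# Assumed image-like input; replace with real preprocessed data
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
```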
 
-### Dynamic Shape Input
-MXNet to ONNX export also supports dynamic input shapes. By setting up 
optional flags in `export_model`, users have the control of partially/fully 
dynamic shape input export. For example, setting the batch dimension to dynamic 
enables dynamic batching inference; setting the width and height dimension to 
dynamic allows inference on images with different shapes. Below is a code 
example for dynamic shape on batch dimension. The flag `dynamic` is set to 
switch on dynamic shape input export, and `dynamic_input_shapes` is used to 
specify which dimensions are dynamic. `None` or any string variable can be used 
to represent a dynamic shape dimension.
+## Dynamic input shapes
+The mx2onnx module also supports dynamic input shapes. We can set `dynamic=True` to turn it on. Note that even with dynamic shapes, a set of static input shapes still needs to be specified in `in_shapes`; on top of that, we also need to specify which dimensions of the input shapes are dynamic in `dynamic_input_shapes`. We can simply set the dynamic dimensions to `None`, e.g. `(1, 3, None, None)`, or use strings in place of the `None`s for better readability in the exported ONNX graph, e.g. `(1, 3, 'Height', 'Width')`.
 
 ```python
 # The first input dimension will be dynamic in this case
 dynamic_input_shapes = [(None, 3, 224, 224)]
-mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file,
-                     dynamic=True, dynamic_input_shapes=dynamic_input_shapes)
+converted_model_path = mx.onnx.export_model(sym, params, in_shapes, in_types, 
onnx_file,
+                                            dynamic=True, 
dynamic_input_shapes=dynamic_input_shapes)
 ```
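
As noted above, string names can stand in for `None` in the dynamic dimensions; a small variant of the same call (a sketch, reusing the names from the block above):

```python
# Height and width are dynamic here; the names appear in the exported ONNX graph
dynamic_input_shapes = [(1, 3, 'Height', 'Width')]
converted_model_path = mx.onnx.export_model(sym, params, in_shapes, in_types, onnx_file,
                                            dynamic=True,
                                            dynamic_input_shapes=dynamic_input_shapes)
```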
 
-## Check validity of ONNX model
+## Validate the exported ONNX model
 
-Now we can check validity of the converted ONNX model by using ONNX checker 
tool. The tool will validate the model by checking if the content contains 
valid protobuf:
+Now that we have the converted model, we can validate its correctness with the 
ONNX checker tool.
 
 ```python
 from onnx import checker
 import onnx
 
-# Load onnx model
+# Load the ONNX model
 model_proto = onnx.load_model(converted_model_path)
 
-# Check if converted ONNX protobuf is valid
+# Check if the converted ONNX protobuf is valid
 checker.check_graph(model_proto.graph)
 ```
 
-If the converted protobuf format doesn't qualify to ONNX proto specifications, 
the checker will throw errors, but in this case it successfully passes. 
+Now that the model passes the check (hopefully :)), we can run it with 
inference frameworks or import it into other deep learning frameworks!
+
+## Simplify the exported ONNX model
 
-This method confirms exported model protobuf is valid. Now, the model is ready 
to be imported in other frameworks for inference! Users may consider to further 
optimize the ONNX model file using various tools such as 
[onnx-simplifier](https://github.com/daquexian/onnx-simplifier).
+Okay, we already have the exporeted ONNX model now, but it may not be the end 
of the story. Due to differences in MXNet's and ONNX's operator specifications, 
sometimes helper operartors/nodes will need to be created to help construct the 
ONNX graph from the MXNet blueprint. In that sense, we recommend our users to 
checkout [onnx-simplifier](https://github.com/daquexian/onnx-simplifier), which 
can greatly simply the exported ONNX model by techniques such as constant 
folding, operator fussion and more.

Review comment:
       spelling: exported, operators, simplify, fusion
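
To make the onnx-simplifier suggestion in this hunk concrete, a minimal sketch assuming the `onnxsim` package is installed (its `simplify` function returns the simplified model plus a validity flag):

```python
import onnx
from onnxsim import simplify

# Load the exported model and run constant folding, operator fusion, etc.
model = onnx.load(converted_model_path)
simplified_model, ok = simplify(model)
assert ok, "simplified model failed onnx-simplifier's equivalence check"

onnx.save(simplified_model, 'model-simplified.onnx')
```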

##########
File path: python/mxnet/onnx/README.md
##########
@@ -70,19 +70,22 @@ Returns:
         Onnx file path
 
 #### Model with Multiple Input
-When the model has multiple input, all the input shapes and dtypes should be 
provided with `in_shapes` and `in_dtypes`. Note that the shape/dtype in 
`in_shapes`/`in_dtypes` must follow the same order as in the MXNet model symbol 
file. If `in_dtypes` is provided as a single data type, the type will be 
applied to all input nodes.
+When the model has multiple input, all the input shapes and dtypes must be 
provided with `in_shapes` and `in_dtypes`. Note that the shape/dtype in 
`in_shapes`/`in_dtypes` must follow the same order as in the MXNet model symbol 
file. If `in_dtypes` is provided as a single data type, then that type will be 
applied to all input nodes.

Review comment:
       "has multiple input**s**"



