Very happy you are doing this, Roshani!

On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote <roshaninagmo...@gmail.com>
wrote:

> Hi guys,
>
>
> I am working on supporting ONNX <https://github.com/onnx/onnx> pre-trained
> models in Apache MXNet and would like to seek your opinion on the choice of
> implementation. I also have created a GitHub issue
> <https://github.com/apache/incubator-mxnet/issues/8319>. Supporting ONNX in
> MXNet will let users move between frameworks with their models; it will
> also enable the MXNet project to be a part of the ONNX open standard and to
> help steer the direction of ONNX.
>
>
> For those who don’t know it, ONNX is an open-source format for AI models
> that enables models to be transferred between frameworks. See
> https://github.com/onnx/onnx for more details.
>
>
> To implement the import/export functionality in MXNet, I propose to expose
> an MXNet Python module “serde” (the name is taken from the Apache Hive
> project) with the following methods supporting different formats:
>
> sym, params = mxnet.serde.import(other_format_file, other_format='onnx')
>
> other_format_file = mxnet.serde.export(mxnet_sym, mxnet_params, 'onnx')
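> To make the proposed surface concrete, here is a minimal sketch of such a
> dispatch layer. Everything in it is hypothetical: the registry, the stand-in
> converters, and the `import_model`/`export_model` names (note that `import`
> is a reserved word in Python, so the real entry point would need a different
> name). A real converter would parse or emit ONNX protobuf instead of the
> string stand-ins used here.

```python
# Hypothetical sketch of the proposed mxnet.serde dispatch layer.
# Converter functions are registered per format name; the stand-ins
# below only illustrate the plumbing, not real ONNX parsing.

_importers = {}
_exporters = {}

def register_importer(fmt):
    """Register a converter that reads `fmt` and returns (sym, params)."""
    def deco(fn):
        _importers[fmt] = fn
        return fn
    return deco

def register_exporter(fmt):
    """Register a converter that writes (sym, params) out as `fmt`."""
    def deco(fn):
        _exporters[fmt] = fn
        return fn
    return deco

# "import" is a Python keyword, so the public API uses import_model here.
def import_model(other_format_file, other_format="onnx"):
    if other_format not in _importers:
        raise ValueError("unsupported format: %s" % other_format)
    return _importers[other_format](other_format_file)

def export_model(sym, params, other_format="onnx"):
    if other_format not in _exporters:
        raise ValueError("unsupported format: %s" % other_format)
    return _exporters[other_format](sym, params)

@register_importer("onnx")
def _import_onnx(path):
    # Stand-in: a real importer would parse the ONNX protobuf here.
    return ("sym-from-%s" % path, {"weights": "params-from-%s" % path})

@register_exporter("onnx")
def _export_onnx(sym, params):
    # Stand-in: a real exporter would serialize to an ONNX protobuf file.
    return "%s.onnx" % sym
```

> Registering converters by format name keeps the public API stable while new
> formats are added behind it, whichever backend does the actual conversion.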
>
>
> The implementation under the hood can be done in two ways:
>
>
> 1) Implement at the MXNet layer by parsing the ONNX model (in protobuf
> format) and turning it into MXNet symbolic operators to build the MXNet
> model directly. Similarly, I can convert the MXNet model to ONNX format at
> this layer.
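> Approach 1 boils down to a table of ONNX-op-to-MXNet-operator mappings plus
> a walk over the ONNX graph in topological order. A rough, self-contained
> sketch of that shape — the dict-based nodes and the mapping table are
> illustrative stand-ins for the real parsed protobuf and `mx.sym` calls, and
> a real converter would also have to translate each node's attributes:

```python
# Illustrative sketch of approach 1: translate each node of an ONNX-style
# graph into the corresponding MXNet symbolic operator by table lookup.
# Nodes are plain dicts standing in for parsed ONNX protobuf messages.

# Hypothetical ONNX-op -> MXNet-operator mapping (a real converter would
# invoke mx.sym.Convolution etc. and map attributes like strides/pads too).
ONNX_TO_MXNET = {
    "Conv": "Convolution",
    "Relu": "Activation(relu)",
    "MaxPool": "Pooling(max)",
    "Gemm": "FullyConnected",
}

def convert_graph(nodes):
    """Walk the graph node by node (assumed already topologically sorted),
    converting each op by table lookup and failing loudly on gaps."""
    converted = []
    for node in nodes:
        op = node["op_type"]
        if op not in ONNX_TO_MXNET:
            raise NotImplementedError("no MXNet mapping for ONNX op %r" % op)
        converted.append((node["name"], ONNX_TO_MXNET[op]))
    return converted
```

> The table makes coverage explicit: any ONNX operator without an entry fails
> fast instead of producing a silently wrong model.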
>
>
> 2) The DMLC community has released the nnvm/tvm compiler and an
> intermediate representation of models; see
> http://www.tvmlang.org/2017/10/06/nnvm-compiler-announcement.html
>
> In the conversation on the GitHub issue
> <https://github.com/apache/incubator-mxnet/issues/8319> I opened, Mu
> mentioned that MXNet would use nnvm/tvm as the backend in the future.
>
>
> We could hook into this layer to implement the import/export functionality.
> nnvm/tvm already implements import for ONNX 0.1.
>
> For import, I will need to:
>
>    1. Enhance nnvm/tvm’s importer to support ONNX 0.2.
>    2. Implement nnvm/tvm -> MXNet symbolic operators.
>
> For export, I will need to:
>
>    1. mxnet -> nnvm/tvm (nnvm/tvm provides this implementation already).
>    2. Implement nnvm/tvm -> ONNX.
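> Both directions of approach 2 are two-hop pipelines through the nnvm/tvm
> intermediate representation. A toy sketch of that plumbing, where every
> stage is a hypothetical string-returning stand-in for the real conversion
> pass named in its comment:

```python
# Toy sketch of approach 2's plumbing: import and export both pass
# through the nnvm/tvm intermediate representation. Each function is a
# stand-in; the comments name the real work it represents.

def onnx_to_nnvm(model):
    # Exists for ONNX 0.1 in nnvm/tvm; would need enhancing for ONNX 0.2.
    return "nnvm(%s)" % model

def nnvm_to_mxnet(graph):
    # New work: convert nnvm/tvm graph to MXNet symbolic operators.
    return "mxnet(%s)" % graph

def mxnet_to_nnvm(model):
    # Already provided by nnvm/tvm.
    return "nnvm(%s)" % model

def nnvm_to_onnx(graph):
    # New work: emit ONNX from the nnvm/tvm graph.
    return "onnx(%s)" % graph

def import_via_nnvm(onnx_model):
    """ONNX -> nnvm/tvm -> MXNet."""
    return nnvm_to_mxnet(onnx_to_nnvm(onnx_model))

def export_via_nnvm(mxnet_model):
    """MXNet -> nnvm/tvm -> ONNX."""
    return nnvm_to_onnx(mxnet_to_nnvm(mxnet_model))
```

> One upside of this shape is that any new format only needs converters to and
> from the shared intermediate representation, not to every other framework.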
>
>
> These are the pros and cons I see in the above approaches:
>
>    1. Import/export at the MXNet layer
>
> Pros:
>
>    1. Stable APIs currently used by users.
>    2. Larger Apache MXNet community of contributors.
>    3. CI pipeline to catch bugs.
>    4. Comparatively less time to implement and put it in the hands of the
>       users.
>
> Cons:
>
>    1. In the future we may have to reimplement at the nnvm/tvm layer, in
>       case MXNet moves to the nnvm/tvm backend (assuming it will move).
>
>
>
>    1.
>
>    Import/export at nnvm/tvm layer
>
> Pros:
>
>    1.
>
>    Less engineering work in case mxnet moves to nnvm/tvm
>    2.
>
>    nnvm/tvm would become a hub to convert to different formats.
>    3.
>
>    nnvm operators are more in parity with mxnet’s gluon APIs this could be
>    useful in case Gluon becomes the only standard that MXNet will support.
>
> Cons:
>
>    1.
>
>    Nascent project with few contributors
>    2.
>
>    Does not support all operators that exist in MXNet Symbolic API
>    3.
>
>    No CI Pipeline
>    4.
>
>    Current Apache MXNet project does not use nnvm/tvm backend
>    5.
>
>    mxnet->nnvm/tvm backend needs more testing and user feedback.
>
>
> Any suggestions on either of these approaches? From the user's perspective,
> this will be an implementation detail that is not exposed.
>
> Thanks,
>
> Roshani
>



-- 


Dominic Divakaruni
206.475.9200 Cell
