My $0.02:

NNVM is not currently an Apache project.  It’s under the DMLC umbrella, whose
direction and governance are unclear. For this reason, I am inclined to
support the new effort being placed in Apache MXNet.


-Chris

On Wed, Oct 18, 2017 at 5:19 PM Tianqi Chen <tqc...@cs.washington.edu>
wrote:

> >
> > - “More hardware backends to mxnet” – MXNet users get the same benefit of
> > HW support by implementing ONNX import on top of MXNet symbolic, right?
> >
>
> The support for nnvm compiler compilation comes directly from going through
> nnvm/top. This includes supporting interesting operators that ONNX does not
> yet support (e.g. broadcast arithmetic) and a real compilation pipeline down
> to code.
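>
> As a small illustration of the operator-coverage point, here is a hedged
> sketch; it assumes the nnvm python package exposes the broadcast arithmetic
> ops under nnvm.symbol, and op names may differ by version:
>
>     import nnvm.symbol as sym
>
>     # two placeholder tensors; shapes are attached later, at compile time
>     x = sym.Variable("x")
>     y = sym.Variable("y")
>
>     # numpy-style broadcasting arithmetic, one of the ops mentioned above
>     z = sym.broadcast_add(x, y)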
>
>
> > - “NNVM Compiler now received contributions from AWS, UW and many other
> > folks in MXNet community.” – agreed it is ramping up, but when you look at
> > the data, it is clear that it is very early on for NNVM. Looking at the
> > repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
> > commits and 32 releases. It seems to be still early on for NNVM, and for a
> > more reliable initial implementation building the import on top of MXNet is
> > easier, faster and safer. MXNet has lots of users already using the
> > Symbolic API, which hopefully means that it is a mature API that is not
> > likely to have breaking changes or major issues.
>
> One major reason that NNVM itself has fewer commits is that it already
> incorporates a lot of the lessons learned from the pains we had when
> building MXNet. Note that MXNet's symbolic API itself has been built on top
> of NNVM for more than a year now.
>
> The only differences between MXNet's current symbolic API and nnvm/top's
> API are:
> - MXNet's API contains legacy issues due to backward compatibility; we
> might consider deprecating some of them.
> - nnvm/top operators do not suffer from legacy issues and strictly follow
> the conventions of numpy and Gluon.
> - In that sense, nnvm/top's symbolic API is actually cleaner and more
> stable, and is the final form we want to migrate to (see the sketch below).
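>
> To make the naming difference concrete, a minimal sketch (signatures are
> from memory and exact attribute names may differ between versions):
>
>     import mxnet as mx
>     import nnvm.symbol as sym
>
>     # MXNet legacy symbolic API: CamelCase op name, "num_hidden" attribute
>     net_mx = mx.sym.FullyConnected(mx.sym.Variable("data"), num_hidden=128)
>
>     # nnvm/top: Gluon/numpy-style naming, "dense" op with a "units" attribute
>     net_top = sym.dense(sym.Variable("data"), units=128)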
>
> Tianqi
>
>
> > On 10/18/17, 14:13, "Tianqi Chen" <workc...@gmail.com on behalf of
> > tqc...@cs.washington.edu> wrote:
> >
> >     I am strongly recommending going through nnvm/top. One major reason
> >     is that supporting the nnvm/top layer means NOT ONLY model-format
> >     compatibility with ONNX. These are the major benefits:
> >
> >
> >     - More hardware backends for MXNet, including OpenCL, Metal, Raspberry
> >     Pi, and the web browser. These are automatically enabled by going
> >     through this layer. In general, we designed the nnvm/tvm stack to
> >     resolve the challenge of MXNet's current weakness in deploying to more
> >     hardware backends (see the compilation sketch after this list).
> >
> >     - More frontend capabilities: nnvm's Gluon-style IR now ingests from
> >     CoreML and ONNX, and in the future Keras. Supporting those will reduce
> >     the amount of engineering effort needed.
> >
> >     - Future compatibility. We all agree that the future is migrating to
> >     Gluon's API. NNVM/top tries to look ahead by directly aligning its
> >     symbolic API with Gluon.
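> >
> >     To make the hardware-backend point concrete, deployment through this
> >     layer looks roughly like the following (a hedged sketch based on the
> >     nnvm compiler examples; exact signatures may have changed since):
> >
> >         import nnvm.symbol as sym
> >         import nnvm.compiler
> >         import tvm
> >         from tvm.contrib import graph_runtime
> >
> >         # stand-in for a real network or an imported ONNX/CoreML model
> >         net = sym.relu(sym.Variable("data"))
> >         shape_dict = {"data": (1, 3, 224, 224)}
> >
> >         # switching target to "opencl", "metal", an ARM llvm triple, etc.
> >         # is what enables the additional hardware backends
> >         graph, lib, params = nnvm.compiler.build(
> >             net, target="opencl", shape=shape_dict)
> >
> >         # run the compiled module with the TVM graph runtime
> >         module = graph_runtime.create(graph, lib, tvm.cl(0))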
> >
> >
> >     I would also like to correct some of the mentioned facts with regard
> >     to the nnvm/tvm stack:
> >
> >     1.   Nascent project with few contributors
> >
> >     The NNVM compiler has now received contributions from AWS, UW and many
> >     other folks in the MXNet community. NNVM itself is already being used
> >     by MXNet. MXNet's internal IR is migrating toward Gluon, with its final
> >     form being nnvm/top.
> >
> >     2.   Does not support all operators that exist in MXNet Symbolic API
> >
> >     Neither nnvm/top nor ONNX supports all operators that exist in the
> >     MXNet symbolic API. The end goal here is mainly to make nnvm/top ONNX
> >     compatible, which is a more reasonable goal.
> >
> >     3.  No CI pipeline and test cases
> >
> >     The NNVM compiler already contains unit tests and CI-tested integration
> >     tests (https://github.com/dmlc/nnvm), with a CI pipeline that is well
> >     tested on CPU and GPU cases for the front-ends.
> >
> >     Tianqi
> >
> >
> >     On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote <
> > roshaninagmo...@gmail.com>
> >     wrote:
> >
> >     > Hi guys,
> >     >
> >     >
> >     > I am working on supporting ONNX <https://github.com/onnx/onnx>
> >     > pre-trained models in Apache MXNet and would like to seek your opinion
> >     > on the choice of implementation. I have also created a GitHub issue
> >     > <https://github.com/apache/incubator-mxnet/issues/8319>. Supporting
> >     > ONNX in MXNet will enable users to move between frameworks with their
> >     > models; it will also enable the MXNet project to be a part of the ONNX
> >     > open standard and to steer the direction of ONNX.
> >     >
> >     >
> >     > For those who don’t know ONNX, ONNX is an open source format for AI
> > models
> >     > which enables models to be transferred between frameworks. Refer to
> >     > https://github.com/onnx/onnx for more details.
> >     >
> >     >
> >     > To implement the import/export functionality in MXNet, I propose to
> >     > expose an MXNet Python module “serde” (name taken from the Apache Hive
> >     > project) with the following methods supporting different formats:
> >     >
> >     > sym, params = mxnet.serde.import(other_format_file, other_format=‘onnx’)
> >     >
> >     > other_format_file = mxnet.serde.export(mxnet_sym, mxnet_params, ‘onnx’)
> >     >
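> >     > Note that "import" is a reserved keyword in Python, so the final
> >     > function will need a different name. A hedged sketch of what the
> >     > module could look like (import_model / export_model are placeholder
> >     > names, not a settled design):
> >     >
> >     >     # hypothetical sketch of an mxnet "serde" module
> >     >     import onnx   # official onnx package, used to parse the protobuf
> >     >
> >     >     def import_model(other_format_file, other_format="onnx"):
> >     >         """Return (sym, params) for a model stored in another format."""
> >     >         if other_format != "onnx":
> >     >             raise NotImplementedError(other_format)
> >     >         model_proto = onnx.load(other_format_file)   # ONNX ModelProto
> >     >         # turning model_proto.graph into MXNet symbols happens here
> >     >         raise NotImplementedError("conversion is the work proposed here")
> >     >
> >     >     def export_model(mxnet_sym, mxnet_params, other_format="onnx"):
> >     >         """Serialize an MXNet symbol and params into another format."""
> >     >         raise NotImplementedError("export is the work proposed here")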
> >     >
> >     > The implementation under the hood can be done in two ways:
> >     >
> >     >
> >     > 1) Implement at the MXNet layer by parsing the ONNX model (in protobuf
> >     > format), turning it into MXNet Symbolic operators, and building the
> >     > MXNet model directly. Similarly, I can convert the MXNet model to ONNX
> >     > format at this layer.
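> >     >
> >     > A rough idea of what option 1 involves (a hedged toy sketch; a real
> >     > importer needs many more operators plus attribute and initializer
> >     > handling, and the helper names here are illustrative only):
> >     >
> >     >     import onnx
> >     >     import mxnet as mx
> >     >
> >     >     # toy table from ONNX op types to MXNet symbol constructors
> >     >     _ONNX_TO_MX = {"Relu": mx.sym.relu, "Tanh": mx.sym.tanh}
> >     >
> >     >     def onnx_graph_to_mxnet(graph):
> >     >         """Rebuild an onnx GraphProto node by node with mx.sym ops."""
> >     >         tensors = {i.name: mx.sym.Variable(i.name) for i in graph.input}
> >     >         for node in graph.node:
> >     >             op = _ONNX_TO_MX[node.op_type]                 # MXNet op
> >     >             args = [tensors[name] for name in node.input]  # its inputs
> >     >             tensors[node.output[0]] = op(*args)
> >     >         return tensors[graph.output[0].name]
> >     >
> >     >     # usage: net = onnx_graph_to_mxnet(onnx.load("model.onnx").graph)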
> >     >
> >     >
> >     > 2) The DMLC community has released the nnvm/tvm compiler and an
> >     > intermediate representation of the models; refer to:
> >     > http://www.tvmlang.org/2017/10/06/nnvm-compiler-announcement.html
> >     >
> >     > Based on the conversation on the GitHub issue
> >     > <https://github.com/apache/incubator-mxnet/issues/8319> I opened,
> Mu
> >     > mentioned that MXNet would use nnvm/tvm as the backend in the
> future.
> >     >
> >     >
> >     > We could hook into this layer to implement the import/export
> >     > functionality. nnvm/tvm already has an ONNX 0.1 importer implemented.
> >     >
> >     > For import (a rough sketch of this flow follows below):
> >     >
> >     >    1. I will need to enhance nnvm/tvm's importer to support ONNX 0.2.
> >     >    2. Implement nnvm/tvm->mxnet symbolic operators.
> >     >
> >     > For export:
> >     >
> >     >    1. mxnet->nnvm/tvm (nnvm/tvm provides this implementation already).
> >     >    2. I will need to implement nnvm/tvm->onnx.
> >     >
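> >     > For reference, the existing nnvm-layer import path looks roughly like
> >     > this (hedged sketch; from_onnx is the ONNX 0.1 importer mentioned
> >     > above, and the final nnvm->mxnet step is hypothetical, the converter
> >     > proposed in import step 2):
> >     >
> >     >     import onnx
> >     >     import nnvm.frontend
> >     >
> >     >     # ONNX protobuf -> nnvm/top graph (exists today for ONNX 0.1)
> >     >     model_proto = onnx.load("model.onnx")
> >     >     net, params = nnvm.frontend.from_onnx(model_proto)
> >     >
> >     >     # nnvm/top -> MXNet symbols: does not exist yet, to be written
> >     >     # mx_sym, mx_params = nnvm_to_mxnet(net, params)   # hypothetical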
> >     >
> >     > These are the pros and cons I see in the above approaches:
> >     >
> >     > 1. Import/export at the MXNet layer
> >     >
> >     > Pros:
> >     >
> >     >    1. Stable APIs currently used by users.
> >     >    2. Larger Apache MXNet community of contributors.
> >     >    3. CI pipeline to catch bugs.
> >     >    4. Comparatively less time to implement and put it in the hands of
> >     >       the users.
> >     >
> >     > Cons:
> >     >
> >     >    1. In the future we may have to reimplement at the nnvm/tvm layer,
> >     >       in case MXNet moves to the nnvm/tvm backend (assuming it will
> >     >       move).
> >     >
> >     > 2. Import/export at the nnvm/tvm layer
> >     >
> >     > Pros:
> >     >
> >     >    1. Less engineering work in case MXNet moves to nnvm/tvm.
> >     >    2. nnvm/tvm would become a hub to convert to different formats.
> >     >    3. nnvm operators are more in parity with MXNet's Gluon APIs; this
> >     >       could be useful in case Gluon becomes the only standard that
> >     >       MXNet will support.
> >     >
> >     > Cons:
> >     >
> >     >    1. Nascent project with few contributors.
> >     >    2. Does not support all operators that exist in MXNet Symbolic API.
> >     >    3. No CI pipeline.
> >     >    4. The current Apache MXNet project does not use the nnvm/tvm
> >     >       backend.
> >     >    5. mxnet->nnvm/tvm backend needs more testing and user feedback.
> >     >
> >     > Any suggestions on either of these approaches? From the user's
> >     > perspective, this will be an implementation detail that is not exposed.
> >     >
> >     > Thanks,
> >     >
> >     > Roshani
> >     >
> >
> >
> >
> >
>
