Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Tianqi Chen
I want to offer one last technical detail. I mentioned two trends in deep
learning systems; there is one question that was omitted: how should we
build a good deployment end for deep learning models?

There is always a paradox to this problem:

- On one hand, the deployment end needs to be lightweight and portable.
- On the other hand, we want a lot of optimizations (memory layout, compute)
and feature support, which makes the project big.

All the existing systems suffer from this problem. The solution is simple:
separate the optimization part from the actual runtime and compile things
down to a bare-metal module. This is the solution the nnvm/top compiler
pipeline offers, which I believe will become standard practice for
deployment and where all systems are headed.
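As a rough sketch of that compile-then-deploy split, against the 2017-era
nnvm/tvm Python API (nnvm_graph and params here are assumed to come from a
frontend conversion; exact signatures may have differed):

import nnvm.compiler
import tvm
from tvm.contrib import graph_runtime

# Ahead of time, on the development machine: run the heavy graph
# optimizations and compile down to a deployable module.
graph, lib, params = nnvm.compiler.build(
    nnvm_graph, target="llvm",
    shape={"data": (1, 3, 224, 224)}, params=params)

# On the deployment end, only the lightweight TVM runtime is needed to
# execute the precompiled module -- no optimizer, no compiler.
module = graph_runtime.create(graph, lib, tvm.cpu())
module.set_input(**params)
module.run()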

Tianqi

On Wed, Oct 18, 2017 at 10:03 PM, Tianqi Chen 
wrote:

> OK, there is some miscommunication here, I guess.  We only need to do a
> "canonicalization" step in the Python API that acts as a symbol-to-symbol
> translation layer. It can be done purely in Python, and there is no need to
> go "down" into C++ to do this.
>
> For example, the current nnvm.from_mxnet API takes a Module or Gluon module
> and gets you back an nnvm/top graph in Python.
>
> All we are asking for is to decompose it into
>
> def mxnet_to_onnx(module):
>     nnvm_graph, params = nnvm_from_mxnet(module)
>     onnx = nnvm_to_onnx(nnvm_graph, params)
>     return onnx
>
> This allows nnvm_from_mxnet to be reused for other purposes, like the
> compilation API that produces deployable modules
>
> Tianqi
>
> On Wed, Oct 18, 2017 at 9:55 PM, Lupesko, Hagay  wrote:
>
>> Tianqi:
>> Thanks for detailing the trends. I fully agree that ONNX is just a graph
>> serialization format – nothing more, nothing less. I also think we all
>> agree that this simple mechanism holds lots of value to DL users since it
>> allows them to move between frameworks easily (e.g. train with MXNet,
>> deploy on a mobile device with Caffe2, or the other way around).
>> As you said, an in-memory IR is different from serialization formats such
>> as ONNX. IRs are designed to make runtime execution as efficient as
>> possible, leveraging software and hardware optimizations. They are indeed
>> complex, and that is where the “meat” is.
>> (BTW ONNX regards itself as an “IR” format, but not in the same sense as
>> NNVM).
>>
>> At the end of the day, Roshani is aiming to deliver simple
>> functionality to MXNet users: (1) take an ONNX file and load it into MXNet
>> so you get a graph+weights you can work with; (2) given a trained model,
>> save it as an ONNX file. Since MXNet users do not interact with NNVM
>> directly, but rather interact with the MXNet API (MXNet Module), isn’t the
>> simplest thing to do just to construct the Module “on the fly” using the
>> MXNet API? Taking the other approach, we would go from the top-level MXNet
>> “load” API, go “down” to NNVM to construct the graph, and go back up to
>> MXNet to expose it as a Module. This seems too complex and does not add any
>> benefit. In whatever way we construct the MXNet Module object, NNVM will
>> always be the underlying in-memory IR that is being executed, so why not
>> take the simpler route?
>>
>> Hagay
>>
>> On 10/18/17, 19:42, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
>>
>> Hi Chris:
>>
>> There is no intention to move things away from mxnet. Lines of code are
>> reduced by having a better design in general; usually, you write less
>> redundant code by benefiting from better design. As I may quote: "the
>> best design is achieved not when there is nothing to add, but when
>> there is nothing to be taken away."
>>
>> MXNet has always benefited from this philosophy and improved with new
>> designs and proper modularization. For example, we saw such reduction and
>> convenience when migrating from MXNet's legacy op interface to
>> NNVM's mechanism. The new mechanism enables things like sparse-aware
>> support and other features that would be much harder to support otherwise.
>>
>> The nnvm/tvm stack brings the same benefit (if not more) and will
>> only add more features to MXNet itself: offering more hardware backends
>> and optimizations, and allowing us to write less code and spend less time
>> optimizing for each backend by going through TVM
>>
>> Tianqi
>>
>> On Wed, Oct 18, 2017 at 7:15 PM, Chris Olivier wrote:
>>
>> > Reduce code base of mxnet? By increasing scope of the dmlc modules? Is the
>> > intent to make mxnet a thin language wrapper around a group of dmlc
>> > modules?
>> >
>> >
>> > On Wed, Oct 18, 2017 at 6:58 PM Tianqi Chen <tqc...@cs.washington.edu>
>> > wrote:
>> >
>> > > To better answer Hagay's question, I would like to dive down a bit
>> > > deeper on the relation between MXNet, NNVM, and model exchange formats
>> > > like ONNX.
>> > >
>> > > There are two major trends in deep learning systems now:

Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Tianqi Chen
OK, there is some miscommunication here, I guess.  We only need to do a
"canonicalization" step in the Python API that acts as a symbol-to-symbol
translation layer. It can be done purely in Python, and there is no need to
go "down" into C++ to do this.

For example, the current nnvm.from_mxnet API takes a Module or Gluon module
and gets you back an nnvm/top graph in Python.
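For reference, a minimal sketch of that conversion step (the converter lived
under nnvm.frontend at the time; treat the exact entry point as approximate):

import mxnet as mx
import nnvm.frontend

# A trivial MXNet symbol standing in for a trained model.
data = mx.sym.Variable("data")
net = mx.sym.FullyConnected(data=data, num_hidden=10, name="fc1")

# Symbol-to-symbol translation into the nnvm/top graph IR, done purely
# in Python -- no trip "down" into C++.
nnvm_sym, params = nnvm.frontend.from_mxnet(net)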

All we are asking for is to decompose it into

def mxnet_to_onnx(module):
    nnvm_graph, params = nnvm_from_mxnet(module)
    onnx = nnvm_to_onnx(nnvm_graph, params)
    return onnx

This allows nnvm_from_mxnet to be reused for other purposes, like the
compilation API that produces deployable modules
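The import direction would decompose the same way; in this sketch,
nnvm_from_onnx and mxnet_from_nnvm are hypothetical names for the two halves
of that path:

def onnx_to_mxnet(onnx_model):
    # Hypothetical helper: parse ONNX into the shared nnvm/top IR first...
    nnvm_graph, params = nnvm_from_onnx(onnx_model)
    # ...then build the MXNet-facing module from that IR, so nnvm_from_onnx
    # stays reusable for the compiler pipeline as well.
    module = mxnet_from_nnvm(nnvm_graph, params)
    return module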

Tianqi

On Wed, Oct 18, 2017 at 9:55 PM, Lupesko, Hagay  wrote:

> Tianqi:
> Thanks for detailing the trends. I fully agree that ONNX is just a graph
> serialization format – nothing more, nothing less. I also think we all
> agree that this simple mechanism holds lots of value to DL users since it
> allows them to move between frameworks easily (e.g. train with MXNet,
> deploy on a mobile device with Caffe2, or the other way around).
> As you said, an in-memory IR is different from serialization formats such as
> ONNX. IRs are designed to make runtime execution as efficient as
> possible, leveraging software and hardware optimizations. They are indeed
> complex, and that is where the “meat” is.
> (BTW ONNX regards itself as an “IR” format, but not in the same sense as
> NNVM).
>
> At the end of the day, Roshani is aiming to deliver simple functionality
> to MXNet users: (1) take an ONNX file and load it into MXNet so you get a
> graph+weights you can work with; (2) given a trained model, save it as an
> ONNX file. Since MXNet users do not interact with NNVM directly, but rather
> interact with the MXNet API (MXNet Module), isn’t the simplest thing to do
> just to construct the Module “on the fly” using the MXNet API? Taking the
> other approach, we would go from the top-level MXNet “load” API, go “down”
> to NNVM to construct the graph, and go back up to MXNet to expose it as a
> Module. This seems too complex and does not add any benefit. In whatever way
> we construct the MXNet Module object, NNVM will always be the underlying
> in-memory IR that is being executed, so why not take the simpler route?
>
> Hagay
>
> On 10/18/17, 19:42, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
>
> Hi Chris:
>
> There is no intention to move things away from mxnet. Lines of code are
> reduced by having a better design in general; usually, you write less
> redundant code by benefiting from better design. As I may quote: "the
> best design is achieved not when there is nothing to add, but when
> there is nothing to be taken away."
>
> MXNet has always benefited from this philosophy and improved with new
> designs and proper modularization. For example, we saw such reduction and
> convenience when migrating from MXNet's legacy op interface to
> NNVM's mechanism. The new mechanism enables things like sparse-aware
> support and other features that would be much harder to support otherwise.
>
> The nnvm/tvm stack brings the same benefit (if not more) and will
> only add more features to MXNet itself: offering more hardware backends
> and optimizations, and allowing us to write less code and spend less time
> optimizing for each backend by going through TVM
>
> Tianqi
>
> On Wed, Oct 18, 2017 at 7:15 PM, Chris Olivier 
> wrote:
>
> > Reduce code base of mxnet? By increasing scope of the dmlc modules? Is the
> > intent to make mxnet a thin language wrapper around a group of dmlc
> > modules?
> >
> >
> > On Wed, Oct 18, 2017 at 6:58 PM Tianqi Chen <tqc...@cs.washington.edu>
> > wrote:
> >
> > > To better answer Hagay's question, I would like to dive down a bit deeper
> > > on the relation between MXNet, NNVM, and model exchange formats like ONNX.
> > >
> > > There are two major trends in deep learning systems now:
> > >
> > > - Common serializable formats, like ONNX and CoreML, that define the
> > > model exchange format.
> > > - The in-memory graph IR for quick optimization and JIT. NNVM and
> > > Tensorflow's XLA fall into this category.
> > >
> > > The exchange formats are great; they only pose a layer of conversion,
> > > which is good for exchange. The real meat still comes from the
> > > compilation and JIT pipeline you have to offer. For that, we will need an
> > > in-memory IR, because the cost of constructing and serializing exchange
> > > formats like protobuf could be high. And usually, the exchange formats
> > > are designed in a minimalistic fashion, making it less easy to extend
> > > them with the information needed to support in-depth optimization like
> > > automatic quantization or accelerator support.
> > >
> > > The current MXNet relies on NNVM for in-memory IR manipulation but does
> > > not contain a compilation component that compiles to the hardware
> > > backends.

Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Lupesko, Hagay
Tianqi:
Thanks for detailing the trends. I fully agree that ONNX is just a graph 
serialization format – nothing more, nothing less. I also think we all agree 
that this simple mechanism holds lots of value to DL users since it allows them 
to move between frameworks easily (e.g. train with MXNet, deploy on a mobile 
device with Caffe2, or the other way around).
As you said, an in-memory IR is different from serialization formats such as
ONNX. IRs are designed to make runtime execution as efficient as possible,
leveraging software and hardware optimizations. They are indeed complex, and
that is where the “meat” is.
(BTW ONNX regards itself as an “IR” format, but not in the same sense as NNVM).

At the end of the day, Roshani is aiming to deliver simple functionality to
MXNet users: (1) take an ONNX file and load it into MXNet so you get a
graph+weights you can work with; (2) given a trained model, save it as an ONNX
file. Since MXNet users do not interact with NNVM directly, but rather interact
with the MXNet API (MXNet Module), isn’t the simplest thing to do just to
construct the Module “on the fly” using the MXNet API? Taking the other
approach, we would go from the top-level MXNet “load” API, go “down” to NNVM to
construct the graph, and go back up to MXNet to expose it as a Module. This
seems too complex and does not add any benefit. In whatever way we construct
the MXNet Module object, NNVM will always be the underlying in-memory IR that
is being executed, so why not take the simpler route?
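A minimal sketch of what constructing the Module "on the fly" could look
like, assuming a hypothetical onnx_to_symbol parser that maps ONNX nodes onto
MXNet symbolic ops:

import mxnet as mx

def load_onnx(path, data_shape):
    # onnx_to_symbol is hypothetical: it parses the ONNX graph and returns
    # (symbol, arg_params, aux_params) built from MXNet symbolic ops.
    sym, arg_params, aux_params = onnx_to_symbol(path)

    # Wrap the graph in a standard Module -- the API MXNet users already know.
    mod = mx.mod.Module(symbol=sym, data_names=["data"], label_names=None)
    mod.bind(data_shapes=[("data", data_shape)], for_training=False)
    mod.set_params(arg_params, aux_params)
    return mod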

Hagay

On 10/18/17, 19:42, "Tianqi Chen"  wrote:

Hi Chris:

There is no intention to move things away from mxnet. Lines of code are
reduced by having a better design in general; usually, you write less
redundant code by benefiting from better design. As I may quote: "the
best design is achieved not when there is nothing to add, but when
there is nothing to be taken away."

MXNet has always benefited from this philosophy and improved with new
designs and proper modularization. For example, we saw such reduction and
convenience when migrating from MXNet's legacy op interface to
NNVM's mechanism. The new mechanism enables things like sparse-aware
support and other features that would be much harder to support otherwise.

The nnvm/tvm stack brings the same benefit (if not more) and will
only add more features to MXNet itself: offering more hardware backends and
optimizations, and allowing us to write less code and spend less time
optimizing for each backend by going through TVM

Tianqi

On Wed, Oct 18, 2017 at 7:15 PM, Chris Olivier 
wrote:

> Reduce code base of mxnet? By increasing scope of the dmlc modules? Is the
> intent to make mxnet a thin language wrapper around a group of dmlc
> modules?
>
>
> On Wed, Oct 18, 2017 at 6:58 PM Tianqi Chen 
> wrote:
>
> > To better answer Hagay's question, I would like to dive down a bit deeper
> > on the relation between MXNet, NNVM, and model exchange formats like ONNX.
> >
> > There are two major trends in deep learning systems now:
> >
> > - Common serializable formats, like ONNX and CoreML, that define the
> > model exchange format.
> > - The in-memory graph IR for quick optimization and JIT. NNVM and
> > Tensorflow's XLA fall into this category.
> >
> > The exchange formats are great; they only pose a layer of conversion,
> > which is good for exchange. The real meat still comes from the compilation
> > and JIT pipeline you have to offer. For that, we will need an in-memory IR,
> > because the cost of constructing and serializing exchange formats like
> > protobuf could be high. And usually, the exchange formats are designed in a
> > minimalistic fashion, making it less easy to extend them with the
> > information needed to support in-depth optimization like automatic
> > quantization or accelerator support.
> >
> > The current MXNet relies on NNVM for in-memory IR manipulation but does not
> > contain a compilation component that compiles to the hardware backends.
> > Exporting to an exchange format and then going back into NNVM to run the
> > compilation poses more burden than a JIT compiler should pay. Using the
> > same in-memory graph IR as the compilation stack gives much more advantage
> > in this respect.
> >
> > The newly introduced nnvm/top compiler offers in-memory graph
> > optimization and compilation, and supports more hardware backends directly
> > via TVM. We already see promising results in edge deployments with a much
> > lower runtime overhead. We will further benefit quickly from more graph
> > optimizations that it has to offer.
> >
> > Building support around this new paradigm offers us the advantage of being
> > future-compatible and takes full benefit of the points I mentioned above.

Re: mxnet Scala Convolution

2017-10-18 Thread YiZhi Liu
Hi TongKe,

The symbols you are looking for are auto-generated by Scala macros.
Please refer to scala-package/macros

2017-10-19 0:40 GMT+00:00 TongKe Xue :
> Hi Rahul,
>
>   Thanks for explaining the high level design + pointing to the
> implementation details.
>
>   Besides reading the C++ code and mentally translating the Scala
> calls, is there a way to get a list of all generated Scala functions?
>
>   I have looked at:
>
> 1. https://mxnet.incubator.apache.org/api/scala/symbol.html
> shows a few examples, but is not exhaustive
>
> 2. 
> https://mxnet.incubator.apache.org/api/scala/docs/index.html#ml.dmlc.mxnet.Symbol
> appears more comprehensive, but I find neither Convolution nor Softmax there.
>
>
> More specifically, my question is: nnvm adds a bunch of Scala bindings
> to C++ code. How do I get a list of all these bindings (name, type of
> inputs, type of output).
>
>
> Thanks!
> --TongKe
>
>
> On Wed, Oct 18, 2017 at 5:28 PM, Rahul Huilgol  wrote:
>> Hi TongKe,
>>
>> These are operators defined in the c++ backend under src/operator. For
>> example convolution is here
>> https://github.com/apache/incubator-mxnet/blob/master/src/operator/convolution.cc
>> . The operators are registered using nnvm, which helps automatically
>> generate the frontend functions.
>>
>> This tutorial on how to add a backend operator
>> 
>> contains information on how to register such operators, which would help
>> you understand the above file.
>> An excerpt from there (for quadratic operator) : "If you use python, when
>> you type import mxnet as mx, two python functions for invoking your backend
>> implementation are generated on the fly: one is for imperative programming
>> registered as mxnet.ndarray.quadratic or mx.nd.quadratic for short; the
>> other one is for symbolic programming registered under module
>> mxnet.symbol.quadratic or mx.sym.quadratic for short."
>>
>> I'd think the Scala package works similarly.
>>
>> Regards,
>> Rahul
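The same on-the-fly registration described in the excerpt above can be
observed (and enumerated) directly from Python, which parallels what the
Scala macros generate; a small sketch against the MXNet API of that era:

import mxnet as mx

# Convolution is usable even though no Python (or Scala) source defines it:
# the backend registration generates the frontend function at import time.
data = mx.sym.Variable("data")
conv = mx.sym.Convolution(data=data, kernel=(3, 3), num_filter=64)

# Enumerate every operator the backend registered under mxnet.symbol.
print(sorted(name for name in dir(mx.sym) if not name.startswith("_")))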
>>
>>
>>
>>
>> On Wed, Oct 18, 2017 at 5:06 PM, TongKe Xue  wrote:
>>
>>> My earlier question was a bit messy.
>>>
>>> To rephrase my question:
>>>
>>> 1. Scala AlexNet sample code calls Symbol.Convolution:
>>>
>>> https://github.com/apache/incubator-mxnet/blob/master/
>>> scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples/visualization/
>>> AlexNet.scala#L30
>>>
>>> 2. Symbol.scala does not contain the string "Convolution"
>>>
>>> https://github.com/apache/incubator-mxnet/blob/master/
>>> scala-package/core/src/main/scala/ml/dmlc/mxnet/Symbol.scala#L982
>>>
>>> Question: where/how is Symbol.Convolution defined?
>>>
>>> On Wed, Oct 18, 2017 at 4:10 PM, TongKe Xue  wrote:
>>> > Hi,
>>> >
>>> > I am reading: https://mxnet.incubator.apache.org/api/scala/symbol.html
>>> >
>>> > I see Symbol.Variable, Symbol.Convolution
>>> >
>>> > When I look at Symbol.scala, I see Symbol.Variable at:
>>> > https://github.com/apache/incubator-mxnet/blob/master/
>>> scala-package/core/src/main/scala/ml/dmlc/mxnet/Symbol.scala#L982
>>> >
>>> > However, I can't find where Convolution, SoftMax, FullyConnected, ...
>>> > are defined.
>>> >
>>> > Where are these Symbols defined?
>>> >
>>> > (I have also tried: grep "Convolution" . -R | grep scala | grep def --
>>> > but found nothing).
>>> >
>>> > Thanks,
>>> > --TongKe
>>>
>>
>>
>>
>> --
>> Rahul Huilgol



-- 
Yizhi Liu
DMLC member
Technical Manager
Qihoo 360 Inc, Shanghai, China


Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Tianqi Chen
Hi Chris:

There is no intention to move things away from mxnet. Lines of code are
reduced by having a better design in general; usually, you write less
redundant code by benefiting from better design. As I may quote: "the
best design is achieved not when there is nothing to add, but when
there is nothing to be taken away."

MXNet has always benefited from this philosophy and improved with new
designs and proper modularization. For example, we saw such reduction and
convenience when migrating from MXNet's legacy op interface to
NNVM's mechanism. The new mechanism enables things like sparse-aware
support and other features that would be much harder to support otherwise.

The nnvm/tvm stack brings the same benefit (if not more) and will
only add more features to MXNet itself: offering more hardware backends and
optimizations, and allowing us to write less code and spend less time
optimizing for each backend by going through TVM

Tianqi

On Wed, Oct 18, 2017 at 7:15 PM, Chris Olivier 
wrote:

> Reduce code base of mxnet? By increasing scope of the dmlc modules? Is the
> intent to make mxnet a thin language wrapper around a group of dmlc
> modules?
>
>
> On Wed, Oct 18, 2017 at 6:58 PM Tianqi Chen 
> wrote:
>
> > To better answer Hagay's question, I would like to dive down a bit deeper
> > on the relation between MXNet, NNVM and model exchange format like ONNX.
> >
> > There are two major trends in deep learning systems now:
> >
> > - Common serializable formats, like ONNX and CoreML, that define the
> > model exchange format.
> > - The in-memory graph IR for quick optimization and JIT. NNVM and
> > Tensorflow's XLA fall into this category.
> >
> > The exchange formats are great; they only pose a layer of conversion,
> > which is good for exchange. The real meat still comes from the compilation
> > and JIT pipeline you have to offer. For that, we will need an in-memory IR,
> > because the cost of constructing and serializing exchange formats like
> > protobuf could be high. And usually, the exchange formats are designed in a
> > minimalistic fashion, making it less easy to extend them with the
> > information needed to support in-depth optimization like automatic
> > quantization or accelerator support.
> >
> > The current MXNet relies on NNVM for in-memory IR manipulation but does not
> > contain a compilation component that compiles to the hardware backends.
> > Exporting to an exchange format and then going back into NNVM to run the
> > compilation poses more burden than a JIT compiler should pay. Using the
> > same in-memory graph IR as the compilation stack gives much more advantage
> > in this respect.
> >
> > The newly introduced nnvm/top compiler offers in-memory graph
> > optimization and compilation, and supports more hardware backends directly
> > via TVM. We already see promising results in edge deployments with a much
> > lower runtime overhead. We will further benefit quickly from more graph
> > optimizations that it has to offer.
> >
> > Building support around this new paradigm offers us the advantage of being
> > future-compatible and takes full benefit of the points I mentioned above.
> >
> > Tianqi
> >
> >
> >
> > On Wed, Oct 18, 2017 at 4:57 PM, Lupesko, Hagay 
> wrote:
> >
> > > Roshani – this is an exciting initiative, ONNX support on MXNet will
> > > enable more users to ramp up on MXNet, which is great.
> > >
> > > Tianqi – a few questions and thoughts about your note:
> > > - “More hardware backends to mxnet” – MXNet users get the same benefit of
> > > HW support implementing ONNX import on top of MXNet symbolic, right?
> > > - “NNVM Compiler now received contributions from AWS, UW and many other
> > > folks in MXNet community.” – agreed it is ramping up, but when you look at
> > > the data, it is clear that it is very early on for NNVM. Looking at the
> > > repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
> > > commits and 32 releases. It seems to be still early on for NNVM, and for a
> > > more reliable initial implementation building the import on top of MXNet is
> > > easier, faster and safer. MXNet has lots of users already using the
> > > Symbolic API, which hopefully means it is a mature API that is not likely
> > > to have breaking changes or major issues.
> > >
> > > I’m supportive of option 1 proposed by Roshani (building serde on top of
> > > MXNet symbolic), but to do it as an encapsulated implementation detail, so
> > > the implementation can be migrated to NNVM or another implementation in
> > > the future, if at that point it seems like the right thing to do.
> > >
> > > Interested in hearing other opinions though…
> > >
> > > Hagay
> > >
> > > On 10/18/17, 14:13, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
> > >
> > > I am strongly recommending going through nnvm/top. One major reason
> > > here is that support of the nnvm/top layer does NOT ONLY mean
> > > compatibility of model format with onnx.

Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Chris Olivier
Reduce code base of mxnet? By increasing scope of the dmlc modules? Is the
intent to make mxnet a thin language wrapper around a group of dmlc
modules?


On Wed, Oct 18, 2017 at 6:58 PM Tianqi Chen 
wrote:

> To better answer Hagay's question, I would like to dive down a bit deeper
> on the relation between MXNet, NNVM and model exchange format like ONNX.
>
> There are two major trends in deep learning systems now:
>
> - Common serializable formats, like ONNX and CoreML, that define the model
> exchange format.
> - The in-memory graph IR for quick optimization and JIT. NNVM and
> Tensorflow's XLA fall into this category.
>
> The exchange formats are great; they only pose a layer of conversion, which
> is good for exchange. The real meat still comes from the compilation and
> JIT pipeline you have to offer. For that, we will need an in-memory IR,
> because the cost of constructing and serializing exchange formats like
> protobuf could be high. And usually, the exchange formats are designed in a
> minimalistic fashion, making it less easy to extend them with the information
> needed to support in-depth optimization like automatic quantization or
> accelerator support.
>
> The current MXNet relies on NNVM for in-memory IR manipulation but does not
> contain a compilation component that compiles to the hardware backends.
> Exporting to an exchange format and then going back into NNVM to run the
> compilation poses more burden than a JIT compiler should pay. Using the
> same in-memory graph IR as the compilation stack gives much more advantage
> in this respect.
>
> The newly introduced nnvm/top compiler offers in-memory graph
> optimization and compilation, and supports more hardware backends directly
> via TVM. We already see promising results in edge deployments with a much
> lower runtime overhead. We will further benefit quickly from more graph
> optimizations that it has to offer.
>
> Building support around this new paradigm offers us the advantage of being
> future-compatible and takes full benefit of the points I mentioned above.
>
> Tianqi
>
>
>
> On Wed, Oct 18, 2017 at 4:57 PM, Lupesko, Hagay  wrote:
>
> > Roshani – this is an exciting initiative, ONNX support on MXNet will
> > enable more users to ramp up on MXNet, which is great.
> >
> > Tianqi – a few questions and thoughts about your note:
> > - “More hardware backends to mxnet” – MXNet users get the same benefit of
> > HW support implementing ONNX import on top of MXNet symbolic, right?
> > - “NNVM Compiler now received contributions from AWS, UW and many other
> > folks in MXNet community.” – agreed it is ramping up, but when you look at
> > the data, it is clear that it is very early on for NNVM. Looking at the
> > repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
> > commits and 32 releases. It seems to be still early on for NNVM, and for a
> > more reliable initial implementation building the import on top of MXNet is
> > easier, faster and safer. MXNet has lots of users already using the
> > Symbolic API, which hopefully means it is a mature API that is not likely
> > to have breaking changes or major issues.
> >
> > I’m supportive of option 1 proposed by Roshani (building serde on top of
> > MXNet symbolic), but to do it as an encapsulated implementation detail, so
> > the implementation can be migrated to NNVM or another implementation in
> > the future, if at that point it seems like the right thing to do.
> >
> > Interested in hearing other opinions though…
> >
> > Hagay
> >
> > On 10/18/17, 14:13, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
> >
> > I am strongly recommending going through nnvm/top. One major reason
> > here is that support of the nnvm/top layer does NOT ONLY mean compatibility
> > of model format with onnx. These are the major benefits:
> >
> >
> > - More hardware backends for mxnet, including opencl, metal, Raspberry Pi,
> > and the web browser. These are automatically enabled by going through this
> > layer. In general, we designed the nnvm/tvm stack to resolve the challenge
> > of current mxnet's weakness in deploying to more hardware backends.
> >
> > - More frontend capabilities: nnvm's gluon-style IR now ingests from
> > CoreML and ONNX, and in the future Keras. Supporting those will reduce the
> > amount of engineering effort needed.
> >
> > - Future compatibility. We all agree that the future is migrating to
> > gluon's API. NNVM/top tries to look ahead by directly adopting the symbolic
> > API to be gluon's.
> >
> >
> > I would also like to correct some of the mentioned facts with regard to
> > the nnvm/tvm stack:
> >
> > 1.   Nascent project with few contributors
> >
> > NNVM Compiler has now received contributions from AWS, UW and many other
> > folks in the MXNet community. NNVM itself is already being used by MXNet.
> > MXNet's internal IR is migrating toward gluon, and its final form is
> > nnvm/top.
Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Tianqi Chen
To better answer Hagay's question, I would like to dive down a bit deeper
on the relation between MXNet, NNVM and model exchange format like ONNX.

There are two major trends in deep learning systems now:

- Common serializable formats, like ONNX and CoreML, that define the model
exchange format.
- The in-memory graph IR for quick optimization and JIT. NNVM and Tensorflow's
XLA fall into this category.

The exchange formats are great; they only pose a layer of conversion, which
is good for exchange. The real meat still comes from the compilation and
JIT pipeline you have to offer. For that, we will need an in-memory IR,
because the cost of constructing and serializing exchange formats like
protobuf could be high. And usually, the exchange formats are designed in a
minimalistic fashion, making it less easy to extend them with the information
needed to support in-depth optimization like automatic quantization or
accelerator support.
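To make the "layer of conversion" point concrete: the exchange format is just
a protobuf you deserialize and walk, nothing more. A minimal sketch with the
onnx Python package (model.onnx is a placeholder path):

import onnx

# Deserialize the protobuf; this is all the exchange format provides --
# a portable description of the graph, not a compiler or a runtime.
model = onnx.load("model.onnx")

# Each node names an op type plus its input/output tensors.
for node in model.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))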

The current MXNet relies on NNVM for in-memory IR manipulation but does not
contain a compilation component that compiles to the hardware backends.
Exporting to an exchange format and then going back into NNVM to run the
compilation poses more burden than a JIT compiler should pay. Using the
same in-memory graph IR as the compilation stack gives much more advantage
in this respect.

The newly introduced nnvm/top compiler offers in-memory graph
optimization and compilation, and supports more hardware backends directly via
TVM. We already see promising results in edge deployments with a much lower
runtime overhead. We will further benefit quickly from more graph
optimizations that it has to offer.

Building support around this new paradigm offers us the advantage of being
future-compatible and takes full benefit of the points I mentioned above.

Tianqi



On Wed, Oct 18, 2017 at 4:57 PM, Lupesko, Hagay  wrote:

> Roshani – this is an exciting initiative, ONNX support on MXNet will
> enable more users to ramp up on MXNet, which is great.
>
> Tianqi – a few questions and thoughts about your note:
> - “More hardware backends to mxnet” – MXNet users get the same benefit of
> HW support implementing ONNX import on top of MXNet symbolic, right?
> - “NNVM Compiler now received contributions from AWS, UW and many other
> folks in MXNet community.” – agreed it is ramping up, but when you look at
> the data, it is clear that it is very early on for NNVM. Looking at the
> repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
> commits and 32 releases. It seems to be still early on for NNVM, and for a
> more reliable initial implementation building the import on top of MXNet is
> easier, faster and safer. MXNet has lots of users already using the
> Symbolic API, which hopefully means it is a mature API that is not likely
> to have breaking changes or major issues.
>
> I’m supportive of option 1 proposed by Roshani (building serde on top of
> MXNet symbolic), but to do it as an encapsulated implementation detail, so
> the implementation can be migrated to NNVM or another implementation in the
> future, if at that point it seems like the right thing to do.
>
> Interested in hearing other opinions though…
>
> Hagay
>
> On 10/18/17, 14:13, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
>
> I am strongly recommending going through nnvm/top. One major reason
> here is that support of the nnvm/top layer does NOT ONLY mean compatibility
> of model format with onnx. These are the major benefits:
>
>
> - More hardware backends for mxnet, including opencl, metal, Raspberry Pi,
> and the web browser. These are automatically enabled by going through this
> layer. In general, we designed the nnvm/tvm stack to resolve the challenge of
> current mxnet's weakness in deploying to more hardware backends.
>
> - More frontend capabilities: nnvm's gluon-style IR now ingests from
> CoreML and ONNX, and in the future Keras. Supporting those will reduce the
> amount of engineering effort needed.
>
> - Future compatibility. We all agree that the future is migrating to
> gluon's API. NNVM/top tries to look ahead by directly adopting the symbolic
> API to be gluon's.
>
>
> I would also like to correct some of the mentioned facts with regard to
> the nnvm/tvm stack:
>
> 1.   Nascent project with few contributors
>
> NNVM Compiler has now received contributions from AWS, UW and many other
> folks in the MXNet community. NNVM itself is already being used by MXNet.
> MXNet's internal IR is migrating toward gluon, and its final form is
> nnvm/top.
>
> 3.   Does not support all operators that exist in MXNet Symbolic API
>
> Neither NNVM/top nor onnx supports all operators that exist in the mxnet
> symbolic API. The end goal here is mainly to make nnvm/top onnx compatible,
> which is a more reasonable goal.
>
> 4.  No CI Pipeline and testcases
>
> NNVM already contains a compiler with unittests and CI-tested integration
> (https://github.com/dmlc/nnvm), with a CI pipeline that is well tested on
> CPU and GPU cases for front-ends.

Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Tianqi Chen
I think the point here is that the API stays the same; the discussion is
only about how we should implement it.
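In other words, the user-facing surface under discussion looks something like
the following, regardless of what sits underneath (the names are illustrative,
not a committed API):

# Illustrative signatures only -- the point of the thread is that this
# surface stays fixed while the implementation behind it is debated.

def import_onnx(path):
    """Load an ONNX file; return (symbol, arg_params, aux_params)."""
    ...

def export_onnx(module, path):
    """Serialize a trained MXNet model to an ONNX file."""
    ...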

Tianqi

On Wed, Oct 18, 2017 at 6:43 PM, Dom Divakaruni <
dominic.divakar...@gmail.com> wrote:

> I imagine users would want to interact with MXNet as they normally do to
> consume or export an ONNX format. How would that work with NNVM? Not sure
> users care about the implementation, as long as it doesn’t add another
> layer of complexity to the workflow.
>
> Regards,
> Dom
>
>
> > On Oct 18, 2017, at 6:29 PM, Tianqi Chen 
> wrote:
> >
> > We plan to incubate nnvm and make it apache eventually. NNVM has now
> > adopted the apache model, as did MXNet originally.
> >
> > My suggestion is mainly for evolving Apache MXNet to become healthier
> > and cleaner in the longer term, with fewer lines of code while
> > supporting more features, and easier to maintain in general. The NNVM/TVM
> > stack is a crucial step in that direction.
> >
> > The fact is that either way in the current discussion won't cost a lot of
> > engineering overhead (Zhi did the onnx->nnvm in around a week).
> >
> > Tianqi
> >
> > On Wed, Oct 18, 2017 at 6:09 PM, Chris Olivier 
> > wrote:
> >
> >> My $0.02:
> >>
> >> NNVM is not currently an Apache module.  It’s under the dmlc umbrella,
> >> whose direction and governance is unclear. For this reason, I am inclined
> >> to support new effort being placed in Apache MXNet
> >>
> >>
> >> -Chris
> >>
> >> On Wed, Oct 18, 2017 at 5:19 PM Tianqi Chen 
> >> wrote:
> >>
> 
>  - “More hardware backends to mxnet” – MXNet users get the same benefit of
>  HW support implementing ONNX import on top of MXNet symbolic, right?
> 
> >>>
> >>> The support of nnvm compiler compilation comes directly from going into
> >>> nnvm/top. This includes supporting interesting operators onnx does not
> >>> yet support (e.g. broadcast arithmetic) and a real compilation pipeline
> >>> to code.
> >>>
> >>>
>  - “NNVM Compiler now received contributions from AWS, UW and many other
>  folks in MXNet community.” – agreed it is ramping up, but when you look at
>  the data, it is clear that it is very early on for NNVM. Looking at the
>  repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
>  commits and 32 releases. It seems to be still early on for NNVM, and for a
>  more reliable initial implementation building the import on top of MXNet is
>  easier, faster and safer. MXNet has lots of users already using the
>  Symbolic API, which hopefully means it is a mature API that is not likely
>  to have breaking changes or major issues.
> 
> >>>
> >>> One major reason that NNVM itself gets fewer commits is that it already
> >>> learned a lot of lessons from the pains we had when building MXNet. Note
> >>> that MXNet's symbolic API itself has been built on top of NNVM for more
> >>> than a year now.
> >>>
> >>> The only difference between mxnet's current symbolic API and nnvm/top's
> >>> API is:
> >>> - MXNet's API contains legacy issues due to backward compatibility; we
> >>> might consider deprecating some of them.
> >>> - nnvm/top operators do not suffer from legacy issues and strictly follow
> >>> the conventions of numpy and Gluon.
> >>> - In that sense, nnvm/top's symbolic API is actually cleaner and more
> >>> stable, and is the final form we want to migrate into.
> >>>
> >>> Tianqi
> >>>
> >>>
>  On 10/18/17, 14:13, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
> 
> I am strongly recommending going through nnvm/top. One major reason
> here is that support of the nnvm/top layer does NOT ONLY mean compatibility
> of model format with onnx. These are the major benefits:
>
> - More hardware backends for mxnet, including opencl, metal, Raspberry Pi,
> and the web browser. These are automatically enabled by going through this
> layer. In general, we designed the nnvm/tvm stack to resolve the challenge
> of current mxnet's weakness in deploying to more hardware backends.
>
> - More frontend capabilities: nnvm's gluon-style IR now ingests from
> CoreML and ONNX, and in the future Keras. Supporting those will reduce the
> amount of engineering effort needed.
>
> - Future compatibility. We all agree that the future is migrating to
> gluon's API. NNVM/top tries to look ahead by directly adopting the symbolic
> API to be gluon's.
>
> I would also like to correct some of the mentioned facts with regard to
> the nnvm/tvm stack:
>
> 1.   Nascent project with few contributors
>
> NNVM Compiler has now received contributions from AWS, UW and many other
> folks in the MXNet community. NNVM itself is already being used by MXNet.

Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Dom Divakaruni
I imagine users would want to interact with MXNet as they normally do to 
consume or export an ONNX format. How would that work with NNVM? Not sure users 
care about the implementation, as long as it doesn’t add another layer of 
complexity to the workflow. 

Regards,
Dom


> On Oct 18, 2017, at 6:29 PM, Tianqi Chen  wrote:
> 
> We plan to incubate nnvm and make it apache eventually. NNVM has now
> adopted the apache model, as did MXNet originally.
> 
> My suggestion is mainly for evolving Apache MXNet to become healthier
> and cleaner in the longer term, with fewer lines of code while
> supporting more features, and easier to maintain in general. The NNVM/TVM
> stack is a crucial step in that direction.
> 
> The fact is that either way in the current discussion won't cost a lot of
> engineering overhead (Zhi did the onnx->nnvm in around a week).
> 
> Tianqi
> 
> On Wed, Oct 18, 2017 at 6:09 PM, Chris Olivier 
> wrote:
> 
>> My $0.02:
>> 
>> NNVM is not currently an Apache module.  It’s under the dmlc umbrella, whose
>> direction and governance is unclear. For this reason, I am inclined to
>> support new effort being placed in Apache MXNet
>> 
>> 
>> -Chris
>> 
>> On Wed, Oct 18, 2017 at 5:19 PM Tianqi Chen 
>> wrote:
>> 
 
 - “More hardware backends to mxnet” – MXNet users get the same benefit of
 HW support implementing ONNX import on top of MXNet symbolic, right?
 
>>> 
>>> The support of nnvm compiler compilation comes directly from going into
>>> nnvm/top. This includes supporting interesting operators onnx does not yet
>>> support (e.g. broadcast arithmetic) and a real compilation pipeline to
>>> code.
>>> 
>>> 
 - “NNVM Compiler now received contributions from AWS, UW and many other
 folks in MXNet community.” – agreed it is ramping up, but when you look at
 the data, it is clear that it is very early on for NNVM. Looking at the
 repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
 commits and 32 releases. It seems to be still early on for NNVM, and for a
 more reliable initial implementation building the import on top of MXNet is
 easier, faster and safer. MXNet has lots of users already using the
 Symbolic API, which hopefully means it is a mature API that is not likely
 to have breaking changes or major issues.
 
>>> 
>>> One major reason that NNVM itself gets fewer commits is that it already
>>> learned a lot of lessons from the pains we had when building MXNet. Note
>>> that MXNet's symbolic API itself has been built on top of NNVM for more
>>> than a year now.
>>> 
>>> The only difference between mxnet's current symbolic API and nnvm/top's
>>> API is:
>>> - MXNet's API contains legacy issues due to backward compatibility; we
>>> might consider deprecating some of them.
>>> - nnvm/top operators do not suffer from legacy issues and strictly follow
>>> the conventions of numpy and Gluon.
>>> - In that sense, nnvm/top's symbolic API is actually cleaner and more
>>> stable, and is the final form we want to migrate into.
>>> 
>>> Tianqi
>>> 
>>> 
 On 10/18/17, 14:13, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
 
 I am strongly recommending going through nnvm/top. One major reason
 here is that support of the nnvm/top layer does NOT ONLY mean compatibility
 of model format with onnx. These are the major benefits:

 - More hardware backends for mxnet, including opencl, metal, Raspberry Pi,
 and the web browser. These are automatically enabled by going through this
 layer. In general, we designed the nnvm/tvm stack to resolve the challenge
 of current mxnet's weakness in deploying to more hardware backends.

 - More frontend capabilities: nnvm's gluon-style IR now ingests from
 CoreML and ONNX, and in the future Keras. Supporting those will reduce the
 amount of engineering effort needed.

 - Future compatibility. We all agree that the future is migrating to
 gluon's API. NNVM/top tries to look ahead by directly adopting the symbolic
 API to be gluon's.

 I would also like to correct some of the mentioned facts with regard to
 the nnvm/tvm stack:

 1.   Nascent project with few contributors

 NNVM Compiler has now received contributions from AWS, UW and many other
 folks in the MXNet community. NNVM itself is already being used by MXNet.
 MXNet's internal IR is migrating toward gluon, and its final form is
 nnvm/top.

 3.   Does not support all operators that exist in MXNet Symbolic API

 Neither NNVM/top nor onnx supports all operators that exist in the mxnet
 symbolic API. The end goal here is mainly to make nnvm/top onnx compatible,
 which is a more reasonable goal.

 4.  No CI Pipeline and testcases

Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Mu Li
Hi Hagay,
As mentioned on my previous thread, "MXNet has lots of users already using
the Symbolic API, which hopefully means it is a mature API that is not
likely to have breaking changes or major issues" actually indicates NNVM is
stable, because MXNet uses NNVM's symbolic.h directly; see
https://github.com/dmlc/nnvm/blob/master/include/nnvm/symbolic.h

I do agree that NNVM/TOP is newer than MXNet/Operator. But NNVM/TOP is
designed according to the lessons we learned in the past two years, and we
are pushing it to be the standard for the mxnet ecosystem.

I don't see the value of implementing the converter based on the older
MXNet/Operator interface. It does not align with the strategic roadmap of
TVM, and it could waste our engineers' valuable time because we would need to
re-do it later. Also, it disobeys the LP "Insist on the Highest Standards,"
given the benefits of NNVM/TOP Tianqi explained.

One concern is that developers may be less familiar with NNVM/TOP, so the
NNVM/TOP approach could take more time. But the community can help with it.


On Wed, Oct 18, 2017 at 4:57 PM, Lupesko, Hagay  wrote:

> Roshani – this is an exciting initiative, ONNX support on MXNet will
> enable more users to ramp up on MXNet, which is great.
>
> Tianqi – a few questions and thoughts about your note:
> - “More hardware backends to mxnet” – MXNet users get the same benefit of
> HW support implementing ONNX import on top of MXNet symbolic, right?
> - “NNVM Compiler now received contributions from AWS, UW and many other
> folks in MXNet community.” – agreed it is ramping up, but when you look at
> the data, it is clear that it is very early on for NNVM. Looking at the
> repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
> commits and 32 releases. It seems to be still early on for NNVM, and for a
> more reliable initial implementation building the import on top of MXNet is
> easier, faster and safer. MXNet has lots of users already using the
> Symbolic API, which hopefully means it is a mature API that is not likely
> to have breaking changes or major issues.
>
> I’m supportive of option 1 proposed by Roshani (building serde on top of
> MXNet symbolic), but to do it as an encapsulated implementation detail, so
> the implementation can be migrated to NNVM or another implementation in the
> future, if at that point it seems like the right thing to do.
>
> Interested in hearing other opinions though…
>
> Hagay
>
> On 10/18/17, 14:13, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
>
> I am strongly recommending going through nnvm/top. One major reason
> here is that support of the nnvm/top layer does NOT ONLY mean compatibility
> of model format with onnx. These are the major benefits:
>
> - More hardware backends for mxnet, including opencl, metal, Raspberry Pi,
> and the web browser. These are automatically enabled by going through this
> layer. In general, we designed the nnvm/tvm stack to resolve the challenge
> of current mxnet's weakness in deploying to more hardware backends.
>
> - More frontend capabilities: nnvm's gluon-style IR now ingests from
> CoreML and ONNX, and in the future Keras. Supporting those will reduce the
> amount of engineering effort needed.
>
> - Future compatibility. We all agree that the future is migrating to
> gluon's API. NNVM/top tries to look ahead by directly adopting the symbolic
> API to be gluon's.
>
> I would also like to correct some of the mentioned facts with regard to
> the nnvm/tvm stack:
>
> 1.   Nascent project with few contributors
>
> NNVM Compiler has now received contributions from AWS, UW and many other
> folks in the MXNet community. NNVM itself is already being used by MXNet.
> MXNet's internal IR is migrating toward gluon, and its final form is
> nnvm/top.
>
> 3.   Does not support all operators that exist in MXNet Symbolic API
>
> Neither NNVM/top nor onnx supports all operators that exist in the mxnet
> symbolic API. The end goal here is mainly to make nnvm/top onnx compatible,
> which is a more reasonable goal.
>
> 4.  No CI Pipeline and testcases
>
> NNVM already contains a compiler with unittests and CI-tested integration
> (https://github.com/dmlc/nnvm), with a CI pipeline that is well tested on
> CPU and GPU cases for front-ends.
>
> Tianqi
>
>
> On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote <
> roshaninagmo...@gmail.com>
> wrote:
>
> > Hi guys,
> >
> > I am working on supporting ONNX pre-trained models in Apache MXNet and
> > would like to seek your opinion on the choice of implementation. I also
> > have created a GitHub issue. Supporting ONNX in MXNet will enable users
> > to move between frameworks with their models; this will also enable the
> > MXNet project to be a part of
Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Tianqi Chen
We plan to incubate nnvm and make it apache eventually. NNVM has now
adopted the apache model, as did MXNet originally.

My suggestion is mainly for evolving Apache MXNet to become healthier
and cleaner in the longer term, with fewer lines of code while
supporting more features, and easier to maintain in general. The NNVM/TVM
stack is a crucial step in that direction.

The fact is that either way in the current discussion won't cost a lot of
engineering overhead (Zhi did the onnx->nnvm in around a week).

Tianqi

On Wed, Oct 18, 2017 at 6:09 PM, Chris Olivier 
wrote:

> My $0.02:
>
> NNVM is not currently an Apache module.  It’s under the dmlc umbrella, whose
> direction and governance is unclear. For this reason, I am inclined to
> support new effort being placed in Apache MXNet
>
>
> -Chris
>
> On Wed, Oct 18, 2017 at 5:19 PM Tianqi Chen 
> wrote:
>
> > >
> > > - “More hardware backends to mxnet” – MXNet users get the same benefit of
> > > HW support implementing ONNX import on top of MXNet symbolic, right?
> > >
> >
> > The support of nnvm compiler compilation comes directly from going into
> > nnvm/top. This includes supporting interesting operators onnx does not yet
> > support (e.g. broadcast arithmetic) and a real compilation pipeline to
> > code.
> >
> >
> > > - “NNVM Compiler now received contributions from AWS, UW and many other
> > > folks in MXNet community.” – agreed it is ramping up, but when you look at
> > > the data, it is clear that it is very early on for NNVM. Looking at the
> > > repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
> > > commits and 32 releases. It seems to be still early on for NNVM, and for a
> > > more reliable initial implementation building the import on top of MXNet
> > > is easier, faster and safer. MXNet has lots of users already using the
> > > Symbolic API, which hopefully means it is a mature API that is not likely
> > > to have breaking changes or major issues.
> > >
> >
> > One major reason that NNVM itself gets fewer commits is that it already
> > learned a lot of lessons from the pains we had when building MXNet. Note
> > that MXNet's symbolic API itself has been built on top of NNVM for more
> > than a year now.
> >
> > The only difference between mxnet's current symbolic API and nnvm/top's
> > API is:
> > - MXNet's API contains legacy issues due to backward compatibility; we
> > might consider deprecating some of them.
> > - nnvm/top operators do not suffer from legacy issues and strictly follow
> > the conventions of numpy and Gluon.
> > - In that sense, nnvm/top's symbolic API is actually cleaner and more
> > stable, and is the final form we want to migrate into.
> >
> > Tianqi
> >
> >
> > > On 10/18/17, 14:13, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
> > >
> > > I am strongly recommending going through nnvm/top. One major reason
> > > here is that support of the nnvm/top layer does NOT ONLY mean
> > > compatibility of model format with onnx. These are the major benefits:
> > >
> > > - More hardware backends for mxnet, including opencl, metal, Raspberry
> > > Pi, and the web browser. These are automatically enabled by going through
> > > this layer. In general, we designed the nnvm/tvm stack to resolve the
> > > challenge of current mxnet's weakness in deploying to more hardware
> > > backends.
> > >
> > > - More frontend capabilities: nnvm's gluon-style IR now ingests from
> > > CoreML and ONNX, and in the future Keras. Supporting those will reduce
> > > the amount of engineering effort needed.
> > >
> > > - Future compatibility. We all agree that the future is migrating to
> > > gluon's API. NNVM/top tries to look ahead by directly adopting the
> > > symbolic API to be gluon's.
> > >
> > > I would also like to correct some of the mentioned facts with regard to
> > > the nnvm/tvm stack:
> > >
> > > 1.   Nascent project with few contributors
> > >
> > > NNVM Compiler has now received contributions from AWS, UW and many other
> > > folks in the MXNet community. NNVM itself is already being used by MXNet.
> > > MXNet's internal IR is migrating toward gluon, and its final form is
> > > nnvm/top.
> > >
> > > 3.   Does not support all operators that exist in MXNet Symbolic API
> > >
> > > Neither NNVM/top nor onnx supports all operators that exist in the mxnet
> > > symbolic API. The end goal here is mainly to make nnvm/top onnx
> > > compatible, which is a more reasonable goal.
> > >
> > > 4.  No CI Pipeline and testcases
> > >
> > > NNVM already contains a compiler with unittests and CI-tested integration
> > > (https://github.com/dmlc/nnvm), with a CI pipeline that is well tested on
> > > CPU and GPU cases for front-ends.
> > >
> > > Tianqi
> > >
> > >
> > > On Wed, Oct 18, 2017 at 1:41 PM, Roshani 

Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Mu Li
I don't get the point. MXNet relies on NNVM; in fact, the Symbol object in
MXNet is defined on NNVM.

On Wed, Oct 18, 2017 at 6:09 PM, Chris Olivier 
wrote:

> My $0.02:
>
> NNVM is not currently an Apache module.  It’s under the dmlc umbrella, whose
> direction and governance is unclear. For this reason, I am inclined to
> support new effort being placed in Apache MXNet
>
>
> -Chris
>
> On Wed, Oct 18, 2017 at 5:19 PM Tianqi Chen 
> wrote:
>
> > >
> > > - “More hardware backends to mxnet” – MXNet users get the same benefit of
> > > HW support implementing ONNX import on top of MXNet symbolic, right?
> > >
> >
> > The support of nnvm compiler compilation comes directly from going into
> > nnvm/top. This includes supporting interesting operators onnx does not yet
> > support (e.g. broadcast arithmetic) and a real compilation pipeline to
> > code.
> >
> >
> > > - “NNVM Compiler now received contributions from AWS, UW and many other
> > > folks in MXNet community.” – agreed it is ramping up, but when you look at
> > > the data, it is clear that it is very early on for NNVM. Looking at the
> > > repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
> > > commits and 32 releases. It seems to be still early on for NNVM, and for a
> > > more reliable initial implementation building the import on top of MXNet
> > > is easier, faster and safer. MXNet has lots of users already using the
> > > Symbolic API, which hopefully means it is a mature API that is not likely
> > > to have breaking changes or major issues.
> > >
> >
> > One major reason that NNVM itself gets fewer commits is that it already
> > learned a lot of lessons from the pains we had when building MXNet. Note
> > that MXNet's symbolic API itself has been built on top of NNVM for more
> > than a year now.
> >
> > The only difference between mxnet's current symbolic API and nnvm/top's
> > API is:
> > - MXNet's API contains legacy issues due to backward compatibility; we
> > might consider deprecating some of them.
> > - nnvm/top operators do not suffer from legacy issues and strictly follow
> > the conventions of numpy and Gluon.
> > - In that sense, nnvm/top's symbolic API is actually cleaner and more
> > stable, and is the final form we want to migrate into.
> >
> > Tianqi
> >
> >
> > > On 10/18/17, 14:13, "Tianqi Chen" <tqc...@cs.washington.edu> wrote:
> > >
> > > I am strongly recommending going through nnvm/top. One major reason
> > > here is that support of the nnvm/top layer does NOT ONLY mean
> > > compatibility of model format with onnx. These are the major benefits:
> > >
> > > - More hardware backends for mxnet, including opencl, metal, Raspberry
> > > Pi, and the web browser. These are automatically enabled by going through
> > > this layer. In general, we designed the nnvm/tvm stack to resolve the
> > > challenge of current mxnet's weakness in deploying to more hardware
> > > backends.
> > >
> > > - More frontend capabilities: nnvm's gluon-style IR now ingests from
> > > CoreML and ONNX, and in the future Keras. Supporting those will reduce
> > > the amount of engineering effort needed.
> > >
> > > - Future compatibility. We all agree that the future is migrating to
> > > gluon's API. NNVM/top tries to look ahead by directly adopting the
> > > symbolic API to be gluon's.
> > >
> > > I would also like to correct some of the mentioned facts with regard to
> > > the nnvm/tvm stack:
> > >
> > > 1.   Nascent project with few contributors
> > >
> > > NNVM Compiler has now received contributions from AWS, UW and many other
> > > folks in the MXNet community. NNVM itself is already being used by MXNet.
> > > MXNet's internal IR is migrating toward gluon, and its final form is
> > > nnvm/top.
> > >
> > > 3.   Does not support all operators that exist in MXNet Symbolic API
> > >
> > > Neither NNVM/top nor onnx supports all operators that exist in the mxnet
> > > symbolic API. The end goal here is mainly to make nnvm/top onnx
> > > compatible, which is a more reasonable goal.
> > >
> > > 4.  No CI Pipeline and testcases
> > >
> > > NNVM already contains a compiler with unittests and CI-tested integration
> > > (https://github.com/dmlc/nnvm), with a CI pipeline that is well tested on
> > > CPU and GPU cases for front-ends.
> > >
> > > Tianqi
> > >
> > >
> > > On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote <
> > > roshaninagmo...@gmail.com>
> > > wrote:
> > >
> > > > Hi guys,
> > > >
> > > >
> > > > I am working on supporting ONNX  pre-trained
> > > > models in Apache MXNet and would like to seek your opinion on the
> > > > choice of implementation. I also have created a GitHub issue
> > > > 

Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Chris Olivier
My $0.02:

NNVM is not currently an Apache module. It's under the dmlc umbrella, whose
direction and governance are unclear. For this reason, I am inclined to
support the new effort being placed in Apache MXNet


-Chris

On Wed, Oct 18, 2017 at 5:19 PM Tianqi Chen 
wrote:

> >
> > - “More hardware backends to mxnet” – MXNet users get the same benefit of
> > HW support by implementing ONNX import on top of MXNet symbolic, right?
> >
>
> The support of nnvm compiler compilation comes directly from going into
> nnvm/top. This includes supporting interesting operators that onnx does not
> yet support (e.g., broadcast arithmetic) and a real compilation pipeline
> down to code.
>
>
> > - “NNVM Compiler now received contributions from AWS, UW and many other
> > folks in MXNet community.” – agreed it is ramping up, but when you look
> > at the data, it is clear that it is very early on for NNVM. Looking at
> > the repo, it has overall 223 commits, 0 releases. Compare it to MXNet
> > with 6136 commits and 32 releases. It seems to be still early on for
> > NNVM, and for a more reliable initial implementation, building the
> > import on top of MXNet is easier, faster and safer. MXNet has lots of
> > users already using the Symbolic API, which hopefully means that it is a
> > mature API that is not likely to have breaking changes or major issues.
> >
>
> One major reason that NNVM itself gets fewer commits is that it already
> incorporates many of the lessons learned from the pains we had when
> building MXNet. Note that MXNet's symbolic API itself has been built on
> top of NNVM for more than a year now.
>
> The only differences between mxnet's current symbolic API and nnvm/top's
> API are:
> - MXNet's API contains legacy issues due to backward compatibility; we
> might consider deprecating some of them.
> - nnvm/top operators do not suffer from legacy issues and strictly follow
> the conventions of numpy and Gluon.
> - In that sense, nnvm/top's symbolic API is actually cleaner and more
> stable, and is the final form we want to migrate to.
>
> Tianqi
>
>
> > On 10/18/17, 14:13, "Tianqi Chen"  > tqc...@cs.washington.edu> wrote:
> >
> > I am strongly recommending going through nnvm/top. One major reason
> > here is that support of the nnvm/top layer means NOT ONLY compatibility
> > of the model format with onnx. These are the major benefits:
> >
> > - More hardware backends for mxnet, including opencl, metal, Raspberry
> > Pi, and the web browser. These are automatically enabled by going
> > through this layer. In general, we designed the nnvm/tvm stack to
> > resolve the challenge of mxnet's current weakness in deploying to more
> > hardware backends.
> >
> > - More frontend capabilities: nnvm's gluon-style IR now ingests from
> > CoreML and ONNX, and in the future Keras. Supporting those will reduce
> > the amount of engineering effort needed.
> >
> > - Future compatibility. We all agree that the future is migrating to
> > gluon's API. NNVM/top looks ahead by directly adopting the gluon-style
> > symbolic API.
> >
> >
> > I would also like to correct some of the mentioned facts with regard to
> > the nnvm/tvm stack:
> >
> > 1. Nascent project with few contributors
> >
> > NNVM Compiler has now received contributions from AWS, UW and many
> > other folks in the MXNet community. NNVM itself is already used by
> > MXNet. MXNet's internal IR is migrating toward gluon, with its final
> > form being nnvm/top.
> >
> > 3. Does not support all operators that exist in MXNet Symbolic API
> >
> > Neither NNVM/top nor onnx supports all operators that exist in the
> > mxnet symbolic API. The end goal here is mainly to make nnvm/top
> > onnx-compatible, which is a more reasonable goal.
> >
> > 4. No CI Pipeline and testcases
> >
> > NNVM already contains a compiler with unittests and CI-tested
> > integration (https://github.com/dmlc/nnvm), with a CI pipeline that is
> > well tested on CPU and GPU cases for front-ends.
> >
> > Tianqi
> >
> >
> > On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote <
> > roshaninagmo...@gmail.com>
> > wrote:
> >
> > > Hi guys,
> > >
> > >
> > > I am working on supporting ONNX  pre-trained
> > > models in Apache MXNet and would like to seek your opinion on the
> > > choice of implementation. I also have created a GitHub issue
> > > . Supporting ONNX in
> > > MXNet will enable users to move between frameworks with their models;
> > > this will also enable the MXNet project to be a part of the ONNX open
> > > standard and steer the direction of ONNX.
> > >
> > >
> > > For those who don’t know ONNX, ONNX is an open source format for AI
> > > models which enables models to be transferred between frameworks.
> > > Refer to

[BUILD FAILED] Branch master build 545

2017-10-18 Thread Apache Jenkins Server
Build for MXNet branch master has broken. Please view the build at 
https://builds.apache.org/job/incubator-mxnet/job/master/545/

Re: mxnet Scala Convolution

2017-10-18 Thread TongKe Xue
Hi Rahul,

  Thanks for explaining the high level design + pointing to the
implementation details.

  Besides reading the C++ code and mentally translating the Scala
calls, is there a way to get a list of all generated Scala functions?

  I have looked at:

1. https://mxnet.incubator.apache.org/api/scala/symbol.html
shows a few examples, but is not exhaustive

2. 
https://mxnet.incubator.apache.org/api/scala/docs/index.html#ml.dmlc.mxnet.Symbol
appears more comprehensive, but I find neither Convolution nor Softmax there.


More specifically, my question is: nnvm adds a bunch of Scala bindings
to C++ code. How do I get a list of all these bindings (name, type of
inputs, type of output).


Thanks!
--TongKe


On Wed, Oct 18, 2017 at 5:28 PM, Rahul Huilgol  wrote:
> Hi TongKe,
>
> These are operators defined in the c++ backend under src/operator. For
> example convolution is here
> https://github.com/apache/incubator-mxnet/blob/master/src/operator/convolution.cc
> . The operators are registered using nnvm, which helps automatically
> generate the frontend functions.
>
> This tutorial on how to add a backend operator
> 
> contains information on how to register such operators, which would help
> you understand the above file.
> An excerpt from there (for the quadratic operator): "If you use python, when
> you type import mxnet as mx, two python functions for invoking your backend
> implementation are generated on the fly: one is for imperative programming
> registered as mxnet.ndarray.quadratic or mx.nd.quadratic for short; the
> other one is for symbolic programming registered under module
> mxnet.symbol.quadratic or mx.sym.quadratic for short."
>
> I'd think the Scala package works similarly.
>
> Regards,
> Rahul
>
>
>
>
> On Wed, Oct 18, 2017 at 5:06 PM, TongKe Xue  wrote:
>
>> My earlier question was a bit messy.
>>
>> To rephrase my question:
>>
>> 1. Scala AlexNet sample code calls Symbol.Convolution:
>>
>> https://github.com/apache/incubator-mxnet/blob/master/
>> scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples/visualization/
>> AlexNet.scala#L30
>>
>> 2. Symbol.scala does not contain the string "Convolution"
>>
>> https://github.com/apache/incubator-mxnet/blob/master/
>> scala-package/core/src/main/scala/ml/dmlc/mxnet/Symbol.scala#L982
>>
>> Question: where/how is Symbol.Convolution defined?
>>
>> On Wed, Oct 18, 2017 at 4:10 PM, TongKe Xue  wrote:
>> > Hi,
>> >
>> > I am reading: https://mxnet.incubator.apache.org/api/scala/symbol.html
>> >
>> > I see Symbol.Variable, Symbol.Convolution
>> >
>> > When I look at Symbol.scala, I see Symbol.Variable at:
>> > https://github.com/apache/incubator-mxnet/blob/master/
>> scala-package/core/src/main/scala/ml/dmlc/mxnet/Symbol.scala#L982
>> >
>> > However, I can't find where Convolution, SoftMax, FullyConnected, ...
>> > are defined.
>> >
>> > Where are these Symbols defined?
>> >
>> > (I have also tried: grep "Convolution" . -R | grep scala | grep def --
>> > but found nothing).
>> >
>> > Thanks,
>> > --TongKe
>>
>
>
>
> --
> Rahul Huilgol


Re: mxnet Scala Convolution

2017-10-18 Thread Rahul Huilgol
Hi TongKe,

These are operators defined in the c++ backend under src/operator. For
example convolution is here
https://github.com/apache/incubator-mxnet/blob/master/src/operator/convolution.cc
. The operators are registered using nnvm, which helps automatically
generate the frontend functions.

This tutorial on how to add a backend operator

contains information on how to register such operators, which would help
you understand the above file.
An excerpt from there (for the quadratic operator): "If you use python, when
you type import mxnet as mx, two python functions for invoking your backend
implementation are generated on the fly: one is for imperative programming
registered as mxnet.ndarray.quadratic or mx.nd.quadratic for short; the
other one is for symbolic programming registered under module
mxnet.symbol.quadratic or mx.sym.quadratic for short."

I'd think the Scala package works similarly.
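
A minimal sketch on the Python side, assuming a standard mxnet installation
(the Scala wrappers come from the same operator registry):

    # The generated operator wrappers can be enumerated at runtime rather
    # than grepped from the source tree, since they are created when the
    # package loads.
    import mxnet as mx

    ops = [name for name in dir(mx.symbol) if not name.startswith('_')]
    print('Convolution' in ops)        # True: a backend-registered operator

    # The generated docstring describes each operator's inputs and
    # parameters:
    print(mx.symbol.Convolution.__doc__)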

Regards,
Rahul




On Wed, Oct 18, 2017 at 5:06 PM, TongKe Xue  wrote:

> My earlier question was a bit messy.
>
> To rephrase my question:
>
> 1. Scala AlexNet sample code calls Symbol.Convolution:
>
> https://github.com/apache/incubator-mxnet/blob/master/
> scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples/visualization/
> AlexNet.scala#L30
>
> 2. Symbol.scala does not contain the string "Convolution"
>
> https://github.com/apache/incubator-mxnet/blob/master/
> scala-package/core/src/main/scala/ml/dmlc/mxnet/Symbol.scala#L982
>
> Question: where/how is Symbol.Convolution defined?
>
> On Wed, Oct 18, 2017 at 4:10 PM, TongKe Xue  wrote:
> > Hi,
> >
> > I am reading: https://mxnet.incubator.apache.org/api/scala/symbol.html
> >
> > I see Symbol.Variable, Symbol.Convolution
> >
> > When I look at Symbol.scala, I see Symbol.Variable at:
> > https://github.com/apache/incubator-mxnet/blob/master/
> scala-package/core/src/main/scala/ml/dmlc/mxnet/Symbol.scala#L982
> >
> > However, I can't find where Convolution, SoftMax, FullyConnected, ...
> > are defined.
> >
> > Where are these Symbols defined?
> >
> > (I have also tried: grep "Convolution" . -R | grep scala | grep def --
> > but found nothing).
> >
> > Thanks,
> > --TongKe
>



-- 
Rahul Huilgol


Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Tianqi Chen
>
> - “More hardware backends to mxnet” – MXNet users get the same benefit of
> HW support by implementing ONNX import on top of MXNet symbolic, right?
>

The support of nnvm compiler compilation comes directly from going into
nnvm/top. This includes supporting interesting operators that onnx does not
yet support (e.g., broadcast arithmetic) and a real compilation pipeline down
to code.


> - “NNVM Compiler now received contributions from AWS, UW and many other
> folks in MXNet community.” – agreed it is ramping up, but when you look at
> the data, it is clear that it is very early on for NNVM. Looking at the
> repo, it has overall 223 commits, 0 releases. Compare it to MXNet with 6136
> commits and 32 releases. It seems to be still early on for NNVM, and for a
> more reliable initial implementation, building the import on top of MXNet
> is easier, faster and safer. MXNet has lots of users already using the
> Symbolic API, which hopefully means that it is a mature API that is not
> likely to have breaking changes or major issues.
>

One major reason that NNVM itself gets fewer commits is that it already
incorporates many of the lessons learned from the pains we had when building
MXNet. Note that MXNet's symbolic API itself has been built on top of NNVM
for more than a year now.

The only differences between mxnet's current symbolic API and nnvm/top's
API are:
- MXNet's API contains legacy issues due to backward compatibility; we might
consider deprecating some of them.
- nnvm/top operators do not suffer from legacy issues and strictly follow
the conventions of numpy and Gluon.
- In that sense, nnvm/top's symbolic API is actually cleaner and more
stable, and is the final form we want to migrate to.

Tianqi


> On 10/18/17, 14:13, "Tianqi Chen"  tqc...@cs.washington.edu> wrote:
>
> I am strongly recommending going through nnvm/top. One major reason here
> is that support of the nnvm/top layer means NOT ONLY compatibility of the
> model format with onnx. These are the major benefits:
>
> - More hardware backends for mxnet, including opencl, metal, Raspberry
> Pi, and the web browser. These are automatically enabled by going through
> this layer. In general, we designed the nnvm/tvm stack to resolve the
> challenge of mxnet's current weakness in deploying to more hardware
> backends.
>
> - More frontend capabilities: nnvm's gluon-style IR now ingests from
> CoreML and ONNX, and in the future Keras. Supporting those will reduce
> the amount of engineering effort needed.
>
> - Future compatibility. We all agree that the future is migrating to
> gluon's API. NNVM/top looks ahead by directly adopting the gluon-style
> symbolic API.
>
>
> I would also like to correct some of the mentioned facts with regard to
> the nnvm/tvm stack:
>
> 1. Nascent project with few contributors
>
> NNVM Compiler has now received contributions from AWS, UW and many other
> folks in the MXNet community. NNVM itself is already used by MXNet.
> MXNet's internal IR is migrating toward gluon, with its final form being
> nnvm/top.
>
> 3. Does not support all operators that exist in MXNet Symbolic API
>
> Neither NNVM/top nor onnx supports all operators that exist in the mxnet
> symbolic API. The end goal here is mainly to make nnvm/top
> onnx-compatible, which is a more reasonable goal.
>
> 4. No CI Pipeline and testcases
>
> NNVM already contains a compiler with unittests and CI-tested integration
> (https://github.com/dmlc/nnvm), with a CI pipeline that is well tested on
> CPU and GPU cases for front-ends.
>
> Tianqi
>
>
> On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote <
> roshaninagmo...@gmail.com>
> wrote:
>
> > Hi guys,
> >
> >
> > I am working on supporting ONNX  pre-trained
> > models in Apache MXNet and would like to seek your opinion on the
> > choice of implementation. I also have created a GitHub issue
> > . Supporting ONNX in
> > MXNet will enable users to move between frameworks with their models;
> > this will also enable the MXNet project to be a part of the ONNX open
> > standard and steer the direction of ONNX.
> >
> >
> > For those who don’t know ONNX, ONNX is an open source format for AI
> > models which enables models to be transferred between frameworks. Refer
> > to https://github.com/onnx/onnx for more details.
> >
> >
> > To implement the import/export functionality in MXNet, I propose to
> > expose an MXNet python module “serde” (name taken from the Apache Hive
> > project) with the following methods supporting different formats:
> >
> > sym, params = mxnet.serde.import(other_format_file, other_format=‘onnx’)
> >
> > other_format_file = mxnet.serde.export(mxnet_sym, mxnet_params, ‘onnx’)
> >
> >
> > The implementation

Re: mxnet Scala Convolution

2017-10-18 Thread TongKe Xue
My earlier question was a bit messy.

To rephrase my question:

1. Scala AlexNet sample code calls Symbol.Convolution:

https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/src/main/scala/ml/dmlc/mxnetexamples/visualization/AlexNet.scala#L30

2. Symbol.scala does not contain the string "Convolution"

https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/ml/dmlc/mxnet/Symbol.scala#L982

Question: where/how is Symbol.Convolution defined?

On Wed, Oct 18, 2017 at 4:10 PM, TongKe Xue  wrote:
> Hi,
>
> I am reading: https://mxnet.incubator.apache.org/api/scala/symbol.html
>
> I see Symbol.Variable, Symbol.Convolution
>
> When I look at Symbol.scala, I see Symbol.Variable at:
> https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/ml/dmlc/mxnet/Symbol.scala#L982
>
> However, I can't find where Convolution, SoftMax, FullyConnected, ...
> are defined.
>
> Where are these Symbols defined?
>
> (I have also tried: grep "Convolution" . -R | grep scala | grep def --
> but found nothing).
>
> Thanks,
> --TongKe


Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Lupesko, Hagay
Roshani – this is an exciting initiative; ONNX support in MXNet will enable
more users to ramp up on MXNet, which is great.

Tianqi – a few questions and thoughts about your note:
- “More hardware backends to mxnet” – MXNet users get the same benefit of HW
support by implementing ONNX import on top of MXNet symbolic, right?
- “NNVM Compiler now received contributions from AWS, UW and many other folks
in MXNet community.” – agreed it is ramping up, but when you look at the data,
it is clear that it is very early on for NNVM. Looking at the repo, it has
overall 223 commits, 0 releases. Compare it to MXNet with 6136 commits and 32
releases. It seems to be still early on for NNVM, and for a more reliable
initial implementation, building the import on top of MXNet is easier, faster
and safer. MXNet has lots of users already using the Symbolic API, which
hopefully means that it is a mature API that is not likely to have breaking
changes or major issues.

I'm supportive of option 1 proposed by Roshani (building serde on top of MXNet
symbolic), but doing it as an encapsulated implementation detail, so that the
implementation can be migrated to NNVM or another backend in the future, if at
that point it seems like the right thing to do (a rough sketch follows below).
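
A rough sketch of that encapsulation, with hypothetical names (serde and its
helpers do not exist yet; this is illustrative only, not an MXNet API):

    # The public entry point stays stable; the conversion backend behind
    # it can be swapped later without breaking callers.
    def import_onnx(onnx_file, backend='mxnet'):
        """Load an ONNX file and return (symbol, params)."""
        if backend == 'mxnet':
            # Option 1: parse the ONNX protobuf into MXNet symbols.
            return _onnx_to_mxnet(onnx_file)
        if backend == 'nnvm':
            # Possible future path via nnvm/top.
            return _onnx_to_nnvm(onnx_file)
        raise ValueError('unsupported backend: %s' % backend)

    def _onnx_to_mxnet(onnx_file):
        raise NotImplementedError('ONNX protobuf -> MXNet Symbol')

    def _onnx_to_nnvm(onnx_file):
        raise NotImplementedError('ONNX -> nnvm/top graph')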

Interested in hearing other opinions though…

Hagay

On 10/18/17, 14:13, "Tianqi Chen"  wrote:

I am strongly recommending going through nnvm/top. One major reason here
is that support of the nnvm/top layer means NOT ONLY compatibility of the
model format with onnx. These are the major benefits:


- More hardware backends for mxnet, including opencl, metal, Raspberry Pi,
and the web browser. These are automatically enabled by going through this
layer. In general, we designed the nnvm/tvm stack to resolve the challenge
of mxnet's current weakness in deploying to more hardware backends.

- More frontend capabilities: nnvm's gluon-style IR now ingests from
CoreML and ONNX, and in the future Keras. Supporting those will reduce the
amount of engineering effort needed.

- Future compatibility. We all agree that the future is migrating to
gluon's API. NNVM/top looks ahead by directly adopting the gluon-style
symbolic API.


I would also like to correct some of the mentioned facts with regard to
the nnvm/tvm stack:

1. Nascent project with few contributors

NNVM Compiler has now received contributions from AWS, UW and many other
folks in the MXNet community. NNVM itself is already used by MXNet.
MXNet's internal IR is migrating toward gluon, with its final form being
nnvm/top.

3. Does not support all operators that exist in MXNet Symbolic API

Neither NNVM/top nor onnx supports all operators that exist in the mxnet
symbolic API. The end goal here is mainly to make nnvm/top onnx-compatible,
which is a more reasonable goal.

4. No CI Pipeline and testcases

NNVM already contains a compiler with unittests and CI-tested integration
(https://github.com/dmlc/nnvm), with a CI pipeline that is well
tested on CPU and GPU cases for front-ends.

Tianqi


On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote 
wrote:

> Hi guys,
>
>
> I am working on supporting ONNX  pre-trained
> models in Apache MXNet and would like to seek your opinion on the choice of
> implementation. I also have created a GitHub issue
> . Supporting ONNX in
> MXNet will enable users to move between frameworks with their models; this
> will also enable the MXNet project to be a part of the ONNX open standard
> and steer the direction of ONNX.
>
>
> For those who don’t know ONNX, ONNX is an open source format for AI models
> which enables models to be transferred between frameworks. Refer to
> https://github.com/onnx/onnx for more details.
>
>
> To implement the import/export functionality in MXNet, I propose to expose
> an MXNet python module “serde” (name taken from the Apache Hive project)
> with the following methods supporting different formats:
>
> sym, params = mxnet.serde.import(other_format_file, other_format=‘onnx’)
>
> other_format_file = mxnet.serde.export(mxnet_sym, mxnet_params, ‘onnx’)
>
>
> The implementation under the hood can be done in two ways:
>
>
> 1) Implement at the MXNet layer by parsing the ONNX model (in protobuf
> format) and turning it into MXNet Symbolic operators to build the MXNet
> model directly. Similarly, I can convert the MXNet model to ONNX format at
> this layer.
>
>
> 2) The DMLC community has released the nnvm/tvm compiler and an
> intermediate representation of the models, refer:
> http://www.tvmlang.org/2017/10/06/nnvm/tvm-compiler-announcement.html
> 

mxnet Scala Convolution

2017-10-18 Thread TongKe Xue
Hi,

I am reading: https://mxnet.incubator.apache.org/api/scala/symbol.html

I see Symbol.Variable, Symbol.Convolution

When I look at Symbol.scala, I see Symbol.Variable at:
https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/ml/dmlc/mxnet/Symbol.scala#L982

However, I can't find where Convolution, SoftMax, FullyConnected, ...
are defined.

Where are these Symbols defined?

(I have also tried: grep "Convolution" . -R | grep scala | grep def --
but found nothing).

Thanks,
--TongKe


Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Tianqi Chen
I am strongly recommending going through nnvm/top. One major reason here
is that support of the nnvm/top layer means NOT ONLY compatibility of the
model format with onnx. These are the major benefits:


- More hardware backends for mxnet, including opencl, metal, Raspberry Pi,
and the web browser. These are automatically enabled by going through this
layer (a minimal sketch follows below). In general, we designed the nnvm/tvm
stack to resolve the challenge of mxnet's current weakness in deploying to
more hardware backends.

- More frontend capabilities: nnvm's gluon-style IR now ingests from
CoreML and ONNX, and in the future Keras. Supporting those will reduce the
amount of engineering effort needed.

- Future compatibility. We all agree that the future is migrating to
gluon's API. NNVM/top looks ahead by directly adopting the gluon-style
symbolic API.
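
A minimal sketch of that compilation path, assuming the 2017-era nnvm
compiler API from the dmlc/nnvm announcement (exact signatures may differ
across versions):

    import mxnet as mx
    import nnvm
    import nnvm.compiler

    # Load a trained MXNet model ('model'/epoch 0 are placeholders).
    sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)

    # Convert the MXNet symbol into an nnvm/top graph.
    nnvm_sym, params = nnvm.frontend.from_mxnet(sym, arg_params, aux_params)

    # Compile for a target backend; 'llvm' here, but targets such as
    # 'opencl' or 'metal' enable the other backends mentioned above.
    graph, lib, params = nnvm.compiler.build(
        nnvm_sym, target='llvm',
        shape={'data': (1, 3, 224, 224)}, params=params)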


I would also like to correct some of the mentioned facts with regard to
the nnvm/tvm stack:

1. Nascent project with few contributors

NNVM Compiler has now received contributions from AWS, UW and many other
folks in the MXNet community. NNVM itself is already used by MXNet.
MXNet's internal IR is migrating toward gluon, with its final form being
nnvm/top.

3. Does not support all operators that exist in MXNet Symbolic API

Neither NNVM/top nor onnx supports all operators that exist in the mxnet
symbolic API. The end goal here is mainly to make nnvm/top onnx-compatible,
which is a more reasonable goal.

4. No CI Pipeline and testcases

NNVM already contains a compiler with unittests and CI-tested integration
(https://github.com/dmlc/nnvm), with a CI pipeline that is well
tested on CPU and GPU cases for front-ends.

Tianqi


On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote 
wrote:

> Hi guys,
>
>
> I am working on supporting ONNX  pre-trained
> models in Apache MXNet and would like to seek your opinion on the choice of
> implementation. I also have created a GitHub issue
> . Supporting ONNX in
> MXNet will enable users to move between frameworks with their models; this
> will also enable the MXNet project to be a part of the ONNX open standard
> and steer the direction of ONNX.
>
>
> For those who don’t know ONNX, ONNX is an open source format for AI models
> which enables models to be transferred between frameworks. Refer to
> https://github.com/onnx/onnx for more details.
>
>
> To implement the import/export functionality in MXNet, I propose to expose
> an MXNet python module “serde” (name taken from the Apache Hive project)
> with the following methods supporting different formats:
>
> sym, params = mxnet.serde.import(other_format_file, other_format=‘onnx’)
>
> other_format_file = mxnet.serde.export(mxnet_sym, mxnet_params, ‘onnx’)
>
>
> The implementation under the hood can be done in two ways:
>
>
> 1) Implement at the MXNet layer by parsing the ONNX model (in protobuf
> format) and turning it into MXNet Symbolic operators to build the MXNet
> model directly. Similarly, I can convert the MXNet model to ONNX format at
> this layer.
>
>
> 2) The DMLC community has released the nnvm/tvm compiler and an
> intermediate representation of the models, refer:
> http://www.tvmlang.org/2017/10/06/nnvm/tvm-compiler-announcement.html
> 
>
> Based on the conversation on the GitHub issue
>  I opened, Mu
> mentioned that MXNet would use nnvm/tvm as the backend in the future.
>
>
> We could hook into this layer to implement the import/export functionality.
> nnvm/tvm has ONNX 0.1 version import implemented.
>
> For import,
>
>    1. I will need to enhance nnvm/tvm's importer to support ONNX 0.2
>    2. Implement nnvm/tvm->mxnet symbolic operators.
>
> For export:
>
>    1. mxnet->nnvm/tvm (nnvm/tvm provides this implementation already)
>    2. I will need to implement nnvm/tvm->onnx.
>
>
> These are the pros and cons I see in the above approaches:
>
>    1. Import/export at mxnet layer
>
> Pros:
>
>    1. Stable APIs currently used by users.
>    2. Larger Apache MXNet community of contributors.
>    3. CI pipeline to catch bugs.
>    4. Comparatively less time to implement and put it in the hands of
>       the users.
>
> Cons:
>
>    1. In the future we may have to reimplement at the nnvm/tvm layer, in
>       case MXNet moves to the nnvm/tvm backend (assuming it will move).
>
>    2. Import/export at nnvm/tvm layer
>
> Pros:
>
>    1. Less engineering work in case mxnet moves to nnvm/tvm
>    2. nnvm/tvm would become a hub to convert to different formats.
>    3. nnvm operators are more in parity with mxnet's gluon APIs; this
>       could be useful in case Gluon becomes the only standard that MXNet
>       will support.
>
> Cons:
>
>    1. Nascent project with few contributors
>    2. Does not support all operator

Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Dominic Divakaruni
very happy you are doing this Roshani!

On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote 
wrote:

> Hi guys,
>
>
> I am working on supporting ONNX  pre-trained
> models in Apache MXNet and would like to seek your opinion on the choice of
> implementation. I also have created a GitHub issue
> . Supporting ONNX in
> MXNet will enable users to move between frameworks with their models; this
> will also enable the MXNet project to be a part of the ONNX open standard
> and steer the direction of ONNX.
>
>
> For those who don’t know ONNX, ONNX is an open source format for AI models
> which enables models to be transferred between frameworks. Refer to
> https://github.com/onnx/onnx for more details.
>
>
> To implement the import/export functionality in MXNet, I propose to expose
> an MXNet python module “serde” (name taken from the Apache Hive project)
> with the following methods supporting different formats:
>
> sym, params = mxnet.serde.import(other_format_file, other_format=‘onnx’)
>
> other_format_file = mxnet.serde.export(mxnet_sym, mxnet_params, ‘onnx’)
>
>
> The implementation under the hood can be done in two ways:
>
>
> 1) Implement at the MXNet layer by parsing the ONNX model (in protobuf
> format) and turning it into MXNet Symbolic operators to build the MXNet
> model directly. Similarly, I can convert the MXNet model to ONNX format at
> this layer.
>
>
> 2) The DMLC community has released the nnvm/tvm compiler and an
> intermediate representation of the models, refer:
> http://www.tvmlang.org/2017/10/06/nnvm/tvm-compiler-announcement.html
> 
>
> Based on the conversation on the GitHub issue
>  I opened, Mu
> mentioned that MXNet would use nnvm/tvm as the backend in the future.
>
>
> We could hook into this layer to implement the import/export functionality.
> nnvm/tvm has ONNX 0.1 version import implemented.
>
> For import,
>
>    1. I will need to enhance nnvm/tvm's importer to support ONNX 0.2
>    2. Implement nnvm/tvm->mxnet symbolic operators.
>
> For export:
>
>    1. mxnet->nnvm/tvm (nnvm/tvm provides this implementation already)
>    2. I will need to implement nnvm/tvm->onnx.
>
>
> These are the pros and cons I see in the above approaches:
>
>    1. Import/export at mxnet layer
>
> Pros:
>
>    1. Stable APIs currently used by users.
>    2. Larger Apache MXNet community of contributors.
>    3. CI pipeline to catch bugs.
>    4. Comparatively less time to implement and put it in the hands of
>       the users.
>
> Cons:
>
>    1. In the future we may have to reimplement at the nnvm/tvm layer, in
>       case MXNet moves to the nnvm/tvm backend (assuming it will move).
>
>    2. Import/export at nnvm/tvm layer
>
> Pros:
>
>    1. Less engineering work in case mxnet moves to nnvm/tvm
>    2. nnvm/tvm would become a hub to convert to different formats.
>    3. nnvm operators are more in parity with mxnet's gluon APIs; this
>       could be useful in case Gluon becomes the only standard that MXNet
>       will support.
>
> Cons:
>
>    1. Nascent project with few contributors
>    2. Does not support all operators that exist in MXNet Symbolic API
>    3. No CI Pipeline
>    4. Current Apache MXNet project does not use nnvm/tvm backend
>    5. mxnet->nnvm/tvm backend needs more testing and user feedback.
>
>
> Any suggestions on both of these approaches? From the user's perspective,
> this will be an implementation detail that is not exposed.
>
> Thanks,
>
> Roshani
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Roshani Nagmote
Hi guys,


I am working on supporting ONNX  pre-trained
models in Apache MXNet and would like to seek your opinion on the choice of
implementation. I also have created a GitHub issue
. Supporting ONNX in
MXNet will enable users to move between frameworks with their models; this
will also enable the MXNet project to be a part of the ONNX open standard and
steer the direction of ONNX.


For those who don’t know ONNX, ONNX is an open source format for AI models
which enables models to be transferred between frameworks. Refer to
https://github.com/onnx/onnx for more details.


To implement the import/export functionality in MXNet, I propose to expose
an MXNet python module “serde” (name taken from the Apache Hive project) with
the following methods supporting different formats:

sym, params = mxnet.serde.import(other_format_file, other_format=‘onnx’)

other_format_file = mxnet.serde.export(mxnet_sym, mxnet_params, ‘onnx’)
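
A hypothetical usage sketch (serde does not exist yet; also note that
"import" is a reserved word in Python, so a real implementation would need
slightly different names, e.g. import_model/export_model):

    import mxnet as mx

    # Load an ONNX model into MXNet (illustrative names only):
    sym, params = mx.serde.import_model('model.onnx', other_format='onnx')

    # The returned symbol/params plug into the usual Module workflow:
    mod = mx.mod.Module(symbol=sym, label_names=None)

    # ...and a trained MXNet model could be exported back out:
    onnx_file = mx.serde.export_model(sym, params, 'onnx')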


The implementation under the hood can be done in two ways:


1) Implement at the MXNet layer by parsing the ONNX model (in protobuf
format) and turning it into MXNet Symbolic operators to build the MXNet
model directly. Similarly, I can convert the MXNet model to ONNX format at
this layer.


2) The DMLC community has released the nnvm/tvm compiler and an
intermediate representation of the models, refer:
http://www.tvmlang.org/2017/10/06/nnvm/tvm-compiler-announcement.html


Based on the conversation on the GitHub issue
 I opened, Mu
mentioned that MXNet would use nnvm/tvm as the backend in the future.


We could hook into this layer to implement the import/export functionality.
nnvm/tvm has ONNX 0.1 version import implemented.

For import,

   1. I will need to enhance nnvm/tvm's importer to support ONNX 0.2
   2. Implement nnvm/tvm->mxnet symbolic operators.

For export:

   1. mxnet->nnvm/tvm (nnvm/tvm provides this implementation already)
   2. I will need to implement nnvm/tvm->onnx (a rough sketch of the
      existing nnvm ONNX import path follows below).
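
A minimal sketch of the existing nnvm ONNX import path, assuming the
2017-era nnvm frontend API (it supports ONNX 0.1, hence the enhancement work
in step 1 above; exact signatures may differ by version):

    import onnx
    import nnvm
    import nnvm.compiler

    # Load the ONNX protobuf and convert it to an nnvm/top graph.
    onnx_model = onnx.load('model.onnx')
    sym, params = nnvm.frontend.from_onnx(onnx_model)

    # From here the graph can be compiled for a target; step 2 of the
    # import plan would instead map this graph onto MXNet symbols.
    graph, lib, params = nnvm.compiler.build(
        sym, target='llvm',
        shape={'input_0': (1, 3, 224, 224)}, params=params)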


These are the pros and cons I see in the above approaches:

   1. Import/export at mxnet layer

Pros:

   1. Stable APIs currently used by users.
   2. Larger Apache MXNet community of contributors.
   3. CI pipeline to catch bugs.
   4. Comparatively less time to implement and put it in the hands of the
      users.

Cons:

   1. In the future we may have to reimplement at the nnvm/tvm layer, in
      case MXNet moves to the nnvm/tvm backend (assuming it will move).

   2. Import/export at nnvm/tvm layer

Pros:

   1. Less engineering work in case mxnet moves to nnvm/tvm
   2. nnvm/tvm would become a hub to convert to different formats.
   3. nnvm operators are more in parity with mxnet's gluon APIs; this could
      be useful in case Gluon becomes the only standard that MXNet will
      support.

Cons:

   1. Nascent project with few contributors
   2. Does not support all operators that exist in MXNet Symbolic API
   3. No CI Pipeline
   4. Current Apache MXNet project does not use nnvm/tvm backend
   5. mxnet->nnvm/tvm backend needs more testing and user feedback.


Any suggestions on both of these approaches? From the user's perspective,
this will be an implementation detail that is not exposed.

Thanks,

Roshani


[BUILD FAILED] Branch master build 543

2017-10-18 Thread Apache Jenkins Server
Build for MXNet branch master has broken. Please view the build at 
https://builds.apache.org/job/incubator-mxnet/job/master/543/

Re: disposing all ndarray in a given context

2017-10-18 Thread Joern Kottmann
Have a look at this code:
https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/ml/dmlc/mxnet/optimizer/AdaDelta.scala

There they have the same problem and use disposeDepsExcept to release resources.

Jörn

On Tue, Oct 17, 2017 at 4:18 PM, TongKe Xue  wrote:
> Following up to this:
>
> I see that the Scala API, when creating ndarray, uses:
>
> https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/ml/dmlc/mxnet/NDArray.scala#L114
>
> which calls
>
> https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/ml/dmlc/mxnet/LibInfo.scala#L42
>
> to get a "handle" from the given context.
>
>
> I've looked through the LibInfo.scala file -- and it's not clear to me
> if there is a way to:
>
> 1) nuke all handles in a Context OR
> 2) get a list of all handles in a Context (so I can manually call dispose)
>
> Is either of these things possible?
>
> Thanks!
>
>
> On Mon, Oct 16, 2017 at 4:15 PM, TongKe Xue  wrote:
>> Quoting: 
>> https://github.com/apache/incubator-mxnet/blob/master/scala-package/core/src/main/scala/ml/dmlc/mxnet/NDArray.scala#L545-L546
>>
>> * WARNING: it is your responsibility to clear this object through dispose().
>> * NEVER rely on the GC strategy
>>
>> Is there a way to say "dispose all ndarrays of this context" ?