Good point, Mu! I think this discussion could be taken one step further:
re-thinking how we version the components of MXNet. At the moment
everything is covered by one version, and that can lead to exactly the
constraints you mentioned. Another example is the Scala namespace change.
We have to hold off on that change until we do a major version bump -
something nobody here would like to do just because of a namespace change.
Maybe we could modularize these third-party components and language
bindings and then version each of them separately from the core of MXNet.
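To make that concrete, here is a purely hypothetical sketch of a separately
versioned "mxnet-onnx" package whose setup.py declares a compatible range of
the core package instead of sharing its version number (package name, version
numbers and ranges below are all made up):

    # Hypothetical: a converter/binding package released on its own schedule,
    # depending on a range of core MXNet versions rather than one fixed version.
    from setuptools import setup

    setup(
        name="mxnet-onnx",                      # made-up package name
        version="0.1.0",                        # versioned independently of core
        install_requires=["mxnet>=1.1,<2.0"],   # compatible core range
        packages=["mxnet_onnx"],
    )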

Best regards,
Marco

On 23.02.2018 at 6:54 PM, "Li, Mu" <m...@amazon.com> wrote:

A general concern about including a package that is under active
development into MXNet: I saw ONNX make a lot of progress these days, such
as control flow, while none of us participate in it. It worries me that
MXNet's releases may need to be coupled to the ONNX version. How do other
frameworks handle this? Caffe2 and PyTorch should be the two that support
ONNX best.

> On Feb 22, 2018, at 5:23 PM, Roshani Nagmote <roshaninagmo...@gmail.com>
> wrote:
>
> Hi Marco,
>
> Good question. ONNX models come with a version number in the model
> protobuf file. We can make use of that field when importing into MXNet.
>
> You can see the discussion and design of versioning policies in ONNX here:
> https://github.com/onnx/onnx/issues/119
>
> - Roshani
>
>
> On Thu, Feb 22, 2018 at 5:21 PM, Naveen Swamy <mnnav...@gmail.com> wrote:
>
>> If you train with a newer version of MXNet and try running on an older
>> version of MXNet, it might not work even today. I am not sure if we
>> want to support such use-cases. This is tangential to this piece of work.
>>
>> If ONNX were to update their version, I think the right place to keep
>> future versions of ONNX compatible should be in ONNX itself, by providing
>> a tool to move from ONNX.v0 to ONNX.v1, so that the various framework
>> converters always move with the latest version of ONNX.
>>
>> ONNX models, I believe, already contain the ONNX version with which they
>> were built.
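A minimal sketch of what such a conversion step could look like, assuming the
onnx package ships a version_converter module (file names and the target opset
number are just examples):

    # Sketch: upgrade a model to a newer ONNX operator-set version so that
    # framework converters only ever have to target the latest ONNX version.
    import onnx
    from onnx import version_converter

    model = onnx.load("model_old.onnx")
    converted = version_converter.convert_version(model, 7)   # target opset 7
    onnx.save(converted, "model_opset7.onnx")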
>>
>>
>> On Thu, Feb 22, 2018 at 4:38 PM, Marco de Abreu <
>> marco.g.ab...@googlemail.com> wrote:
>>
>>> Hello Roshani,
>>>
>>> interesting document and a good step towards allowing customers and
>>> developers to adopt MXNet faster.
>>>
>>> Just one quick question: How would your proposed design handle
>>> compatibility between old and new versions of MXNet as well as other
>>> frameworks? Since serde (import/export) is part of the MXNet source, we
>>> won't be able to update it independently. One example I'm thinking about
>>> is training on the latest version of MXNet and running inference on an
>>> older version. Could this cause issues since the ONNX model could be of
>>> a higher version than the import on the old MXNet version is able to
>>> load? Would it be necessary to have some kind of compatibility mode
>>> during the export process in which you define the target ONNX model
>>> version? There might also be different operator versions etc.
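Purely as illustration, such a compatibility mode on the export side might
look something like the sketch below; the module path, function name and
target_opset argument are all hypothetical, not an existing API:

    # Hypothetical export call that pins the ONNX version/opset being written,
    # so that an older MXNet importer can still load the resulting file.
    from mxnet.contrib import onnx as onnx_mxnet   # assumed module path

    onnx_mxnet.export_model(sym="model-symbol.json",
                            params="model-0000.params",
                            input_shape=[(1, 3, 224, 224)],
                            onnx_file_path="model.onnx",
                            target_opset=7)    # hypothetical compatibility knob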
>>>
>>> Best regards,
>>> Marco
>>>
>>>
>>>
>>> On Fri, Feb 23, 2018 at 1:15 AM, Roshani Nagmote <
>>> roshaninagmo...@gmail.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I wanted to follow up on the proposal I sent before.
>>>> https://cwiki.apache.org/confluence/display/MXNET/Proposal%3A+ImportExport+module
>>>>
>>>> It will be great if you can provide your feedback or suggestions.
>>>>
>>>> Thanks,
>>>> Roshani
>>>>
>>>> On Thu, Jan 18, 2018 at 4:47 PM, Roshani Nagmote <
>>>> roshaninagmo...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> I have written an initial design proposal for a `serde` (temporary
>>>>> name) module for importing and exporting different model formats, like
>>>>> ONNX and CoreML, to and from MXNet.
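As a rough illustration only, such a module's interface could look something
like the following; the module path and function names here are placeholders,
not the API in the proposal:

    # Hypothetical serde interface: one entry point per direction, with the
    # model format passed explicitly.
    from mxnet.contrib import serde        # placeholder module name

    # ONNX -> MXNet
    sym, arg_params, aux_params = serde.import_model("resnet.onnx",
                                                     format="onnx")

    # MXNet -> CoreML
    serde.export_model(sym, arg_params, aux_params,
                       file_path="resnet.mlmodel", format="coreml")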
>>>>>
>>>>> Please take a look and feel free to provide suggestions in the
>>>>> comment section.
>>>>>
>>>>> https://cwiki.apache.org/confluence/display/MXNET/Proposal%3A+ImportExport+module
>>>>>
>>>>> Note: I will be traveling next week with limited access to emails. So,
>>>>> responses might be delayed.
>>>>>
>>>>> Thanks,
>>>>> Roshani
>>>>>
>>>>
>>>
>>
