Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-31 Thread Samuel Audet
> We are looking for a robust solution for MXNet Java developers to use, one 
> that is owned and maintained by the Apache MXNet community. I will be more 
> than happy to see you contribute the source code that generates the MXNet 
> JavaCpp package to this repo, so that we can own the maintenance and be 
> responsible to end users for the package's reliability.
> 
> At the beginning, we discussed several ways to preserve a low-level Java API 
> for MXNet that anyone who uses Java can start with. Most of the problems lay 
> in the ownership and maintenance part. I have listed JavaCpp as option 5 so 
> we can see which one works best in the end.

Sounds good, thanks! If you have any specific concerns about the above, please 
let me know. JNA seems to be maintained by a single person with apparently no 
connections to the AI industry 
(https://dzone.com/articles/scratch-netbeans-itch-matthias). As part of my work 
I already maintain APIs mainly for OpenCV, FFmpeg, ONNX Runtime, and TensorFlow 
at the moment, along with others that vary over time, and MXNet could eventually 
become part of those. I also have users paying for commercial support of 
proprietary libraries, so I think JavaCPP is the better option here, but I'm 
obviously biased. :)

-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-667041968

Re: [apache/incubator-mxnet] [RFC] Double dependency for ONNX (#18824)

2020-07-31 Thread Sheng Zha
Can we drop onnx-tensorrt in favor of native tensorrt integration? It seems 
perfectly ok to run that outside of MXNet as it's more of an ONNX feature.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/18824#issuecomment-666989953

Re: [CI][v1.x] Update: unix-gpu timeouts

2020-07-30 Thread sandeep krishnamurthy
Thank you Chai.

On Thu, 30 Jul 2020, 6:35 pm Chaitanya Bapat,  wrote:

> Hello MXNet community,
>
> *If your PRs aren't targeted towards v1.* branches [e.g. v1.x], then
> ignore.*
>
> For a few weeks, we've witnessed timeouts in the CI unix-gpu pipeline,
> specifically on PRs targeted towards the v1.x branch. This was because the
> unix-gpu pipeline for v1.x was not running on the updated unix-gpu
> toolchain.
>
> Having identified this, I cherry-picked the unix-gpu toolchain update PR
> from master to v1.x: #18785. Refer to the original PR #18186 for details
> about what's included as part of the unix-gpu toolchain upgrade.
>
> The PR #18785 is currently waiting on a resolution by Apache Infra to
> prevent master branch code from being merged into v1.x, an issue which got
> introduced as part of enabling branch protection. Ticket: 20616
> 
>
> Once the PR gets merged, it should ease up the timeout issue.
>
> Sorry for the inconvenience caused by the timeouts. Thanks for the continued
> patience & incredible work contributing to the MXNet project.
>
> Thanks,
> Chai
>
> --
> *Chaitanya Prakash Bapat*
> *+1 (973) 953-6299*
>


Re: [Announcement] New Committer - Sam Skalicky

2020-07-29 Thread Aaron Markham
Alright Sam! Congratulations!

On Wed, Jul 29, 2020, 8:41 PM Kshitij Kalambarkar <
kshitijkalambar...@gmail.com> wrote:

> Congrats Sam! Keep up the great work.
>
> On Thu, 30 Jul, 2020, 8:49 AM Zhang Zhi,  wrote:
>
> > Congrats Sam!!
> >
> > -Zhi
> >
> > On Wed, Jul 29, 2020 at 7:30 PM Chen, Ciyong 
> > wrote:
> >
> > > Congratulations Sam!
> > > Those are great features to MXNet.
> > >
> > > -Original Message-
> > > From: Zhao, Patric 
> > > Sent: Thursday, July 30, 2020 8:40 AM
> > > To: dev@mxnet.incubator.apache.org
> > > Subject: RE: [Announcement] New Committer - Sam Skalicky
> > >
> > > Congratulations, Sam, thanks for all of your great work in MXNet 😊
> > >
> > > > -Original Message-
> > > > From: Chaitanya Bapat 
> > > > Sent: Thursday, July 30, 2020 1:12 AM
> > > > To: dev@mxnet.incubator.apache.org
> > > > Subject: Re: [Announcement] New Committer - Sam Skalicky
> > > >
> > > > Congratulations Sam! Well deserved!
> > > >
> > > > On Wed, 29 Jul 2020 at 08:05, Marco de Abreu <
> marco.g.ab...@gmail.com>
> > > > wrote:
> > > >
> > > > > Welcome!
> > > > >
> > > > > -Marco
> > > > >
> > > > > On Wed, Jul 29, 2020, 4:58 PM sandeep krishnamurthy <
> > > > > sandeep.krishn...@gmail.com> wrote:
> > > > >
> > > > > > Hello all,
> > > > > >
> > > > > > Please join me in welcoming Sam Skalicky (@samskalicky) as a new
> > > > > > committer of Apache MXNet (incubating)!
> > > > > >
> > > > > > Sam has made a number of contributions to this project such as
> > > > > > SubGraphs, Custom Ops, Accelerator APIs, along with several other
> > > > > > operator implementations and bug fixes. Sam has been actively
> > > > > > engaging in PR reviews, dev@ list discussions and helping the
> > > > > > project and fellow contributors.
> > > > > >
> > > > > > Sam, thank you for all your contributions and looking forward to
> > > > > > more support!
> > > > > >
> > > > > > Welcome, Sam!
> > > > > >
> > > > > > --
> > > > > > Sandeep Krishnamurthy
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > *Chaitanya Prakash Bapat*
> > > > *+1 (973) 953-6299*
> > > >
> > >
> >
>


Re: [Announcement] New Committer - Sam Skalicky

2020-07-29 Thread Kshitij Kalambarkar
Congrats Sam! Keep up the great work.

On Thu, 30 Jul, 2020, 8:49 AM Zhang Zhi,  wrote:

> Congrats Sam!!
>
> -Zhi
>
> On Wed, Jul 29, 2020 at 7:30 PM Chen, Ciyong 
> wrote:
>
> > Congratulations Sam!
> > Those are great features to MXNet.
> >
> > -Original Message-
> > From: Zhao, Patric 
> > Sent: Thursday, July 30, 2020 8:40 AM
> > To: dev@mxnet.incubator.apache.org
> > Subject: RE: [Announcement] New Committer - Sam Skalicky
> >
> > Congratulations, Sam, thanks for all of your great work in MXNet 😊
> >
> > > -Original Message-
> > > From: Chaitanya Bapat 
> > > Sent: Thursday, July 30, 2020 1:12 AM
> > > To: dev@mxnet.incubator.apache.org
> > > Subject: Re: [Announcement] New Committer - Sam Skalicky
> > >
> > > Congratulations Sam! Well deserved!
> > >
> > > On Wed, 29 Jul 2020 at 08:05, Marco de Abreu 
> > > wrote:
> > >
> > > > Welcome!
> > > >
> > > > -Marco
> > > >
> > > > On Wed, Jul 29, 2020, 4:58 PM sandeep krishnamurthy <
> > > > sandeep.krishn...@gmail.com> wrote:
> > > >
> > > > > Hello all,
> > > > >
> > > > > Please join me in welcoming Sam Skalicky (@samskalicky) as a new
> > > > > committer of Apache MXNet (incubating)!
> > > > >
> > > > > Sam has made a number of contributions to this project such as
> > > > > SubGraphs, Custom Ops, Accelerator APIs, along with several other
> > > > > operator implementations and bug fixes. Sam has been actively
> > > > > engaging in PR reviews, dev@ list discussions and helping the
> > > > > project and fellow contributors.
> > > > >
> > > > > Sam, thank you for all your contributions and looking forward to
> > > > > more support!
> > > > >
> > > > > Welcome, Sam!
> > > > >
> > > > > --
> > > > > Sandeep Krishnamurthy
> > > > >
> > > >
> > >
> > >
> > > --
> > > *Chaitanya Prakash Bapat*
> > > *+1 (973) 953-6299*
> > >
> >
>


Re: [Announcement] New Committer - Sam Skalicky

2020-07-29 Thread Zhang Zhi
Congrats Sam!!

-Zhi

On Wed, Jul 29, 2020 at 7:30 PM Chen, Ciyong  wrote:

> Congratulations Sam!
> Those are great features to MXNet.
>
> -Original Message-
> From: Zhao, Patric 
> Sent: Thursday, July 30, 2020 8:40 AM
> To: dev@mxnet.incubator.apache.org
> Subject: RE: [Announcement] New Committer - Sam Skalicky
>
> Congratulations, Sam, thanks for all of your great work in MXNet 😊
>
> > -Original Message-
> > From: Chaitanya Bapat 
> > Sent: Thursday, July 30, 2020 1:12 AM
> > To: dev@mxnet.incubator.apache.org
> > Subject: Re: [Announcement] New Committer - Sam Skalicky
> >
> > Congratulations Sam! Well deserved!
> >
> > On Wed, 29 Jul 2020 at 08:05, Marco de Abreu 
> > wrote:
> >
> > > Welcome!
> > >
> > > -Marco
> > >
> > > On Wed, Jul 29, 2020, 4:58 PM sandeep krishnamurthy <
> > > sandeep.krishn...@gmail.com> wrote:
> > >
> > > > Hello all,
> > > >
> > > > Please join me in welcoming Sam Skalicky (@samskalicky) as a new
> > > > committer of Apache MXNet (incubating)!
> > > >
> > > > Sam has made a number of contributions to this project such as
> > > > SubGraphs, Custom Ops, Accelerator APIs, along with several other
> > > > operator implementations and bug fixes. Sam has been actively
> > > > engaging in PR reviews, dev@ list discussions and helping the
> > > > project and fellow contributors.
> > > >
> > > > Sam, thank you for all your contributions and looking forward to
> > > > more support!
> > > >
> > > > Welcome, Sam!
> > > >
> > > > --
> > > > Sandeep Krishnamurthy
> > > >
> > >
> >
> >
> > --
> > *Chaitanya Prakash Bapat*
> > *+1 (973) 953-6299*
> >
>


RE: [Announcement] New Committer - Sam Skalicky

2020-07-29 Thread Chen, Ciyong
Congratulations Sam! 
Those are great features to MXNet.

-Original Message-
From: Zhao, Patric  
Sent: Thursday, July 30, 2020 8:40 AM
To: dev@mxnet.incubator.apache.org
Subject: RE: [Announcement] New Committer - Sam Skalicky

Congratulations, Sam, thanks for all of your great work in MXNet 😊

> -Original Message-
> From: Chaitanya Bapat 
> Sent: Thursday, July 30, 2020 1:12 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [Announcement] New Committer - Sam Skalicky
> 
> Congratulations Sam! Well deserved!
> 
> On Wed, 29 Jul 2020 at 08:05, Marco de Abreu 
> wrote:
> 
> > Welcome!
> >
> > -Marco
> >
> > On Wed, Jul 29, 2020, 4:58 PM sandeep krishnamurthy < 
> > sandeep.krishn...@gmail.com> wrote:
> >
> > > Hello all,
> > >
> > > Please join me in welcoming Sam Skalicky (@samskalicky) as a new 
> > > committer of Apache MXNet (incubating)!
> > >
> > > Sam has made a number of contributions to this project such as 
> > > SubGraphs, Custom Ops, Accelerator APIs, along with several other 
> > > operator implementations and bug fixes. Sam has been actively 
> > > engaging in PR reviews, dev@ list discussions and helping the 
> > > project and fellow contributors.
> > >
> > > Sam, thank you for all your contributions and looking forward to 
> > > more support!
> > >
> > > Welcome, Sam!
> > >
> > > --
> > > Sandeep Krishnamurthy
> > >
> >
> 
> 
> --
> *Chaitanya Prakash Bapat*
> *+1 (973) 953-6299*
> 


RE: [Announcement] New Committer - Sam Skalicky

2020-07-29 Thread Zhao, Patric
Congratulations, Sam, thanks for all of your great work in MXNet 😊

> -Original Message-
> From: Chaitanya Bapat 
> Sent: Thursday, July 30, 2020 1:12 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [Announcement] New Committer - Sam Skalicky
> 
> Congratulations Sam! Well deserved!
> 
> On Wed, 29 Jul 2020 at 08:05, Marco de Abreu 
> wrote:
> 
> > Welcome!
> >
> > -Marco
> >
> > On Wed, Jul 29, 2020, 4:58 PM sandeep krishnamurthy <
> > sandeep.krishn...@gmail.com> wrote:
> >
> > > Hello all,
> > >
> > > Please join me in welcoming Sam Skalicky (@samskalicky) as a new
> > > committer of Apache MXNet (incubating)!
> > >
> > > Sam has made a number of contributions to this project such as
> > > SubGraphs, Custom Ops, Accelerator APIs, along with several other
> > > operator implementations and bug fixes. Sam has been actively
> > > engaging in PR reviews, dev@ list discussions and helping the
> > > project and fellow contributors.
> > >
> > > Sam, thank you for all your contributions and looking forward to
> > > more support!
> > >
> > > Welcome, Sam!
> > >
> > > --
> > > Sandeep Krishnamurthy
> > >
> >
> 
> 
> --
> *Chaitanya Prakash Bapat*
> *+1 (973) 953-6299*
> 


Re: [apache/incubator-mxnet] [RFC] Double dependency for ONNX (#18824)

2020-07-29 Thread Chaitanya Prakash Bapat
Thanks for creating the RFC. For readers to understand this issue better, could 
you substantiate it with a PR example of how onnx-tensorrt updates can break the 
MX-ONNX import/export functionality?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/18824#issuecomment-665983516

Re: [Announcement] New Committer - Sam Skalicky

2020-07-29 Thread Chaitanya Bapat
Congratulations Sam! Well deserved!

On Wed, 29 Jul 2020 at 08:05, Marco de Abreu 
wrote:

> Welcome!
>
> -Marco
>
> On Wed, Jul 29, 2020, 4:58 PM sandeep krishnamurthy <
> sandeep.krishn...@gmail.com> wrote:
>
> > Hello all,
> >
> > Please join me in welcoming Sam Skalicky (@samskalicky) as a new committer
> > of Apache MXNet (incubating)!
> >
> > Sam has made a number of contributions to this project such as SubGraphs,
> > Custom Ops, Accelerator APIs, along with several other operator
> > implementations and bug fixes. Sam has been actively engaging in PR
> > reviews, dev@ list discussions and helping the project and fellow
> > contributors.
> >
> > Sam, thank you for all your contributions and looking forward to more
> > support!
> >
> > Welcome, Sam!
> >
> > --
> > Sandeep Krishnamurthy
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-29 Thread Yuan Tang
This is great discussion. Thanks @lanking520 for initiating this. Perhaps we 
can define some key metrics here so we can compare the solutions later? 

-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-665775006

Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-29 Thread Lanking
@saudet Thanks for your reply. Still, I have a concern about the first question.

You mentioned:
> We can go either way, but I found that contemporary projects like 
> Deeplearning4j, MXNet, PyTorch, or TensorFlow that need to develop 
> high-level APIs on top of something like JavaCPP prefer to have control over 
> everything in their own repositories, and use JavaCPP pretty much like we 
> would use cython or pybind11 with setuptools for Python.

We are looking for a robust solution for MXNet Java developers to use, one that 
is owned and maintained by the Apache MXNet community. I will be more than happy 
to see you contribute the source code that generates the MXNet JavaCpp package 
to this repo, so that we can own the maintenance and be responsible to end users 
for the package's reliability.

At the beginning, we discussed several ways to preserve a low-level Java API for 
MXNet that anyone who uses Java can start with. Most of the problems lay in the 
ownership and maintenance part. I have listed JavaCpp as option 5 so we can see 
which one works best in the end.

-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-665771813

Re: [Announcement] New Committer - Sam Skalicky

2020-07-29 Thread Marco de Abreu
Welcome!

-Marco

On Wed, Jul 29, 2020, 4:58 PM sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> Hello all,
>
> Please join me in welcoming Sam Skalicky (@samskalicky) as a new committer
> of Apache MXNet (incubating)!
>
> Sam has made a number of contributions to this project such as SubGraphs,
> Custom Ops, Accelerator APIs, along with several other operator
> implementations and bug fixes. Sam has been actively engaging in PR
> reviews, dev@ list discussions and helping the project and fellow
> contributors.
>
> Sam, thank you for all your contributions and looking forward to more
> support!
>
> Welcome, Sam!
>
> --
> Sandeep Krishnamurthy
>


Re: assimilation of mshadow into the MXNet codebase

2020-07-27 Thread Justin Mclean
Hi,

> Here's the second update. At the moment we are only missing ICLAs from 20 
> (out of 70) contributors, accounting for 31 (out of 913) commits left.

IMO that's still a significant number of commits and people.

> Regarding whether the contributors are employed by a company that requires 
> CCLA for contribution, I have no way of verifying the contributors' 
> employment status at the time of contribution, and not enough bandwidth to 
> verify the individual company policies on such contribution. As such, I will 
> solely rely on the ICLAs.

The risk is that the IPMC may require more than that; I can’t predict how other 
IPMC members will vote in this case. Intel has supplied CCLAs in the past, and 
the fact that there are people on that list not covered by a CCLA could be a 
concern, along with the missing ICLAs.

Thanks,
Justin

Re: assimilation of mshadow into the MXNet codebase

2020-07-27 Thread Sheng Zha
Hi,

Here's the second update. At the moment we are only missing ICLAs from 20
(out of 70) contributors, accounting for 31 (out of 913) commits left.

3 @zhenlinluo
3 @jpauwels
3 @hjk41
3 @DrustZ
2 @zhangchen-qinyinghua
2 @yinghu5
2 @reyoung
1 @xinyu-intel
1 @xingmingjie
1 @qiaohaijun
1 @loveisp
1 @lebeg
1 @kaleidoscopical
1 @jason-xuan
1 @happynear
1 @glingyan
1 @asitstands
1 @antoine-wdg-rmz
1 @alextnewman
1 @Harmonicahappy

Regarding whether the contributors are employed by a company that requires
CCLA for contribution, I have no way of verifying the contributors'
employment status at the time of contribution, and not enough bandwidth to
verify the individual company policies on such contribution. As such, I
will solely rely on the ICLAs.

Given the current status, I think the remaining 31 commits are manageable
for me even if I end up having to revert and rework all of them. Let me
know if you have any concerns about starting the IP clearance process.
Otherwise I think we can start it on general@incubator soon.

Cheers,
Sheng

On Sun, Jul 26, 2020 at 8:59 PM Justin Mclean 
wrote:

> Hi,
>
> In the case of Intel and other companies, it may be that their employee
> contracts do not allow employees to contribute to OS projects. It is more
> likely that the contributor doesn’t own the copyright of the code but their
> employer does. A CCLA gives a clear indication that the contributors are
> in fact allowed to contribute code and own the copyright of their
> contributions. We have CCLAs from Intel on file from other contributions, so
> it would seem that Intel requires this.
>
> Thanks,
> Justin


Re: [apache/incubator-mxnet] [RFC] v1.8.0 release (#18800)

2020-07-27 Thread Jake Lee
I would like to include the [BatchNorm performance improvement 
PR](https://github.com/apache/incubator-mxnet/pull/18676) for axis != 1 in 1.8.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/18800#issuecomment-664642323

Re: [apache/incubator-mxnet] [RFC] v1.8.0 release (#18800)

2020-07-27 Thread Serge Panev
These PRs related to Partition API changes for Gluon support could also be 
added:

- Partition API adding and deleting new params to Block and Symbol  
https://github.com/apache/incubator-mxnet/pull/18405
- Add backward Type inference to main NN operators 
https://github.com/apache/incubator-mxnet/pull/18378
- Add better partial args/aux handling in symbol optimize_for 
https://github.com/apache/incubator-mxnet/pull/18350
- Fix FInferShape for some ops to support partial type inference 
https://github.com/apache/incubator-mxnet/pull/18348
- Change include to relative in nvvm_to_onnx.cc 
https://github.com/apache/incubator-mxnet/pull/18249

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/18800#issuecomment-664634047

Re: assimilation of mshadow into the MXNet codebase

2020-07-26 Thread Justin Mclean
Hi,

In the case of Intel and other companies, it may be that their employee 
contracts do not allow employees to contribute to OS projects. It is more likely 
that the contributor doesn’t own the copyright of the code but their employer 
does. A CCLA gives a clear indication that the contributors are in fact allowed 
to contribute code and own the copyright of their contributions. We have CCLAs 
from Intel on file from other contributions, so it would seem that Intel 
requires this.

Thanks,
Justin

Re: Distribution of release candidates

2020-07-26 Thread Sheng Zha
Hi Justin,

Please excuse the delay. I replied on general@ [1]. I suggest that we
continue the discussion in that thread and not fork the discussion.

Regards,
Sheng

[1]
https://lists.apache.org/thread.html/r149bb0721c40f145d3a6a9c847f0172cffa5ec78f84d1ee775b4103c%40%3Cgeneral.incubator.apache.org%3E

On Sun, Jul 26, 2020 at 7:43 PM Justin Mclean  wrote:

> Hi,
>
> I asked on the incubator vote but didn't get a reply.
>
> Can the PPMC please explain why release candidates are being released in
> this way, in particular by companies who employ MXNet PPMC members.
>
> What steps will the PPMC take to stop this from happening in the
> future?
>
> Thanks,
> Justin
>
>


Re: assimilation of mshadow into the MXNet codebase

2020-07-26 Thread Sheng Zha
Justin,

Are you OK with proceeding?

Regards,
Sheng

On Sun, Jul 26, 2020 at 8:30 PM Tianqi Chen 
wrote:

> As long as we have CLAs covering the majority of the code (which I
> believe we do), I think we should be good.
> It is just like the case of Apache only requiring an ICLA from committers.
>
> The rationale is that normal contributions are already made under ALv2;
> in the case of an (unlikely) dispute, the community can quickly rewrite the
> code (since that is not the majority).
>
> TQ
>
> On Sun, Jul 26, 2020 at 5:49 PM Sheng Zha  wrote:
>
> > Hi Justin,
> >
> > Thanks, that's a good point. I think we have already received CCLA from
> > Intel. I will take that into account when providing the next update.
> >
> > Regards,
> > Sheng
> >
> > On Sun, Jul 26, 2020 at 5:39 PM Justin Mclean 
> > wrote:
> >
> > > Hi,
> > >
> > > > Several people in the list below are from Intel and I have added them
> > > > into CC.
> > >
> > > Has Intel signed a CCLA? And if so, does it list the people who are allowed
> > > to contribute to this project? Are there any others on that list whose
> > > employers may need to also sign CCLAs if we don’t have them?
> > >
> > > Thanks,
> > > Justin
> >
>


Re: assimilation of mshadow into the MXNet codebase

2020-07-26 Thread Tianqi Chen
As long as we have CLAs covering the majority of the code (which I
believe we do), I think we should be good.
It is just like the case of Apache only requiring an ICLA from committers.

The rationale is that normal contributions are already made under ALv2;
in the case of an (unlikely) dispute, the community can quickly rewrite the
code (since that is not the majority).

TQ

On Sun, Jul 26, 2020 at 5:49 PM Sheng Zha  wrote:

> Hi Justin,
>
> Thanks, that's a good point. I think we have already received CCLA from
> Intel. I will take that into account when providing the next update.
>
> Regards,
> Sheng
>
> On Sun, Jul 26, 2020 at 5:39 PM Justin Mclean 
> wrote:
>
> > Hi,
> >
> > > Several people in the list below are from Intel and I have added them into
> > > CC.
> >
> > Has Intel signed a CCLA? And if so, does it list the people who are allowed to
> > contribute to this project? Are there any others on that list whose
> > employers may need to also sign CCLAs if we don’t have them?
> >
> > Thanks,
> > Justin
>


Re: assimilation of mshadow into the MXNet codebase

2020-07-26 Thread Sheng Zha
Hi Justin,

Thanks, that's a good point. I think we have already received CCLA from
Intel. I will take that into account when providing the next update.

Regards,
Sheng

On Sun, Jul 26, 2020 at 5:39 PM Justin Mclean 
wrote:

> Hi,
>
> > Several people in the list below are from Intel and I have added them into
> > CC.
>
> Has Intel signed a CCLA? And if so, does it list the people who are allowed to
> contribute to this project? Are there any others on that list whose
> employers may need to also sign CCLAs if we don’t have them?
>
> Thanks,
> Justin


Re: assimilation of mshadow into the MXNet codebase

2020-07-26 Thread Justin Mclean
Hi,

> Several people in the list below are from Intel and I have added them into CC.

Has Intel signed a CCLA? And if so, does it list the people who are allowed to 
contribute to this project? Are there any others on that list whose employers 
may need to also sign CCLAs if we don’t have them?

Thanks,
Justin

RE: assimilation of mshadow into the MXNet codebase

2020-07-26 Thread Zhao, Patric
Several people in the list below are from Intel and I have added them into CC.

Sheng, you can contact them for their ICLAs.

Thanks,

--Patric

> -Original Message-
> From: Sheng Zha 
> Sent: Monday, July 27, 2020 5:33 AM
> To: Justin Mclean 
> Cc: d...@mxnet.apache.org; Wall Michael ; Bob Paulin
> ; wei...@apache.org; jason...@apache.org; Chen, Ciyong
> 
> Subject: Re: assimilation of mshadow into the MXNet codebase
> 
> Hi,
> 
> Here's an update on this issue. We are still missing the ICLAs from 32 (out
> of 70) mshadow contributors, accounting for a total of 62 (out of 913)
> commits. (@ap-hynninen passed away a few years ago and is not included). I
> reached out to them through email and other channels to collect ICLAs for
> mshadow. I will wait
> for a day or two before updating on the progress again, and we can decide then
> whether we are good to start the IP clearance.
> 
> The complete list of mshadow contributors' GitHub logins that are missing ICLA
> is here ("#commits @github-login"):
> 
> 8 @Lorrainexun
> 6 @tornadomeet
> 5 @asmushetzel
> 3 @zhenlinluo
> 3 @stefanhenneking
> 3 @jpauwels
> 3 @hjk41
> 3 @DrustZ
> 2 @zhangchen-qinyinghua
> 2 @yinghu5
> 2 @reyoung
> 2 @forwchen
> 1 @yupbank
> 1 @yllan
> 1 @xinyu-intel
> 1 @xingmingjie
> 1 @xianyi
> 1 @tdomhan
> 1 @siemanko
> 1 @qiaohaijun
> 1 @maxint
> 1 @loveisp
> 1 @lebeg
> 1 @kdavis-mozilla
> 1 @kaleidoscopical
> 1 @jason-xuan
> 1 @happynear
> 1 @glingyan
> 1 @asitstands
> 1 @antoine-wdg-rmz
> 1 @alextnewman
> 1 @Harmonicahappy
> 
> Best,
> Sheng
> 
> On Thu, Jul 23, 2020 at 12:28 AM Sheng Zha  wrote:
> 
> > Hi,
> >
> > No, I don’t think we used ICLAs for mshadow before.
> >
> > Out of the 42 people who made more than 1 commit or more than 10 lines
> > of code change to mshadow, 26 signed ICLA with Apache (and
> > additionally one member is unfortunately deceased...). Would this be a
> > better criteria as “the major ones”? I wasn’t part of the initial code
> > donation or the initial PPMC group, so apologies if the questions were 
> > silly.
> >
> > I think the rest of the commits are manageable so that I could do a
> > revert and rework for those commits if/when necessary.
> >
> > Regards,
> > Sheng
> >
> > > On Jul 22, 2020, at 11:50 PM, Justin Mclean
> > > 
> > wrote:
> > >
> > > Hi,
> > >
> > >> Thanks for clarifying. All contributors who made more than 10
> > >> commits
> > to msahdow before are committers of MXNet, so their ICLAs should
> > already be on file: tqchen, bingxu, eric.xie, sxjscience, mli,
> > yajiedesign [1]. If you think this is OK, one of the mentors or I can start 
> > the
> notification.
> > >
> > >
> > > What about the other 60 contributors? More than 10 commits is not a
> > > line
> > I would feel comfortable with. You need to be able to account for the
> > IP provenance of every line of code, just like in your initial code 
> > donation.
> > It would probably be best to make a list all contributors and if they
> > have an ICLA or not. Did the mshadow project use ICLAs? If so that may also
> help.
> > >
> > > Thanks,
> > > Justin
> >


Re: assimilation of mshadow into the MXNet codebase

2020-07-26 Thread Sheng Zha
Hi,

Here's an update on this issue. We are still missing the ICLAs from 32 (out
of 70) mshadow contributors, accounting for a total of 62 (out of 913)
commits. (@ap-hynninen passed away a few years ago and is not included). I
reached out to them through email and other channels to collect ICLA for
mshadow. I will wait for a day or two before updating on the progress
again, and we can decide then whether we are good to start the IP clearance.

The complete list of mshadow contributors' GitHub logins that are missing
ICLA is here ("#commits @github-login"):

8 @Lorrainexun
6 @tornadomeet
5 @asmushetzel
3 @zhenlinluo
3 @stefanhenneking
3 @jpauwels
3 @hjk41
3 @DrustZ
2 @zhangchen-qinyinghua
2 @yinghu5
2 @reyoung
2 @forwchen
1 @yupbank
1 @yllan
1 @xinyu-intel
1 @xingmingjie
1 @xianyi
1 @tdomhan
1 @siemanko
1 @qiaohaijun
1 @maxint
1 @loveisp
1 @lebeg
1 @kdavis-mozilla
1 @kaleidoscopical
1 @jason-xuan
1 @happynear
1 @glingyan
1 @asitstands
1 @antoine-wdg-rmz
1 @alextnewman
1 @Harmonicahappy

Best,
Sheng

On Thu, Jul 23, 2020 at 12:28 AM Sheng Zha  wrote:

> Hi,
>
> No, I don’t think we used ICLAs for mshadow before.
>
> Out of the 42 people who made more than 1 commit or more than 10 lines of
> code change to mshadow, 26 signed ICLA with Apache (and additionally one
> member is unfortunately deceased...). Would this be a better criteria as
> “the major ones”? I wasn’t part of the initial code donation or the initial
> PPMC group, so apologies if the questions were silly.
>
> I think the rest of the commits are manageable so that I could do a revert
> and rework for those commits if/when necessary.
>
> Regards,
> Sheng
>
> > On Jul 22, 2020, at 11:50 PM, Justin Mclean 
> wrote:
> >
> > Hi,
> >
> >> Thanks for clarifying. All contributors who made more than 10 commits
> to msahdow before are committers of MXNet, so their ICLAs should already be
> on file: tqchen, bingxu, eric.xie, sxjscience, mli, yajiedesign [1]. If you
> think this is OK, one of the mentors or I can start the notification.
> >
> >
> > What about the other 60 contributors? More than 10 commits is not a line
> I would feel comfortable with. You need to be able to account for the IP
> provenance of every line of code, just like in your initial code donation.
> It would probably be best to make a list all contributors and if they have
> an ICLA or not. Did the mshadow project use ICLAs? If so that may also help.
> >
> > Thanks,
> > Justin
>


Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-25 Thread Samuel Audet
> ## What's missing
> 
> javacpp-presets-mxnet doesn't expose APIs from nnvm/c_api.h (some of the 
> current python/gluon API depends on APIs in nnvm/c_api.h)

I've added that the other day, thanks to @frankfliu for pointing this out: 
https://github.com/bytedeco/javacpp-presets/commit/976e6f7d307b3f3855f39413c494d8f482c9adf6

> See javadoc: http://bytedeco.org/javacpp-presets/mxnet/apidocs/
> 
> 1. Java class name is “mxnet”, which is not following java naming conventions

That's not hardcoded. We can use whatever name we want for that class.

> 2. Each pointer has a corresponding java class, which is arguable. It's 
> necessary to expose them as strong type class if they meant to be used 
> directly by end developer. But they really should only be internal 
> implementation of the API. It's overkill to expose them as a Type instead of 
> just a pointer.

We can map everything to `Pointer`, that's not a problem either.

> 3. All the classes (except mxnet.java) are hand written.

No, they are not. Everything in the `src/gen` directory here is generated at 
build time:
https://github.com/bytedeco/javacpp-presets/tree/master/mxnet/src/gen/java/org/bytedeco/mxnet
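
As a rough illustration of how those generated bindings get used, here is a 
minimal sketch. It assumes the `org.bytedeco.mxnet` artifacts are on the 
classpath and that `MXGetVersion` from mxnet/c_api.h is among the generated 
methods; the class name `VersionCheck` is just for illustration:
```
import org.bytedeco.javacpp.IntPointer;
import org.bytedeco.mxnet.global.mxnet;

public class VersionCheck {
    public static void main(String[] args) {
        // The generated global class maps the C functions to static native methods.
        IntPointer version = new IntPointer(1);
        int ret = mxnet.MXGetVersion(version);
        System.out.println("ret=" + ret + ", MXNet version=" + version.get());
    }
}
```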

> 4. API mapping are hand coded as well.

If you're talking about this file, yes, that's the only thing that is written 
manually:
https://github.com/bytedeco/javacpp-presets/blob/master/mxnet/src/main/java/org/bytedeco/mxnet/presets/mxnet.java
(The formatting is a bit crappy, I haven't touched it in a while, but we can 
make it look prettier like this:
https://github.com/bytedeco/javacpp-presets/blob/master/onnxruntime/src/main/java/org/bytedeco/onnxruntime/presets/onnxruntime.java
 )

> ## Performance
> 
> JavaCPP native library load takes a long time, it takes average _2.6 seconds_ 
> to initialize libmxnet.so with javacpp.
> 
> Loader.load(org.bytedeco.mxnet.global.mxnet.class);

Something's wrong, that takes less than 500 ms on my laptop, and that includes 
loading OpenBLAS, OpenCV, and a lookup for CUDA and MKL, which can obviously be 
optimized... In any case, we can debug that later to see what is going wrong on 
your end.
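
For reference, a minimal timing sketch one could use to compare numbers on both 
ends (assuming the `org.bytedeco.mxnet` artifacts are on the classpath; the 
class name `LoadTiming` is just for illustration):
```
import org.bytedeco.javacpp.Loader;

public class LoadTiming {
    public static void main(String[] args) throws ClassNotFoundException {
        long start = System.nanoTime();
        // Loads libmxnet.so and its bundled dependencies via JavaCPP.
        Loader.load(Class.forName("org.bytedeco.mxnet.global.mxnet"));
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Loader.load took " + elapsedMs + " ms");
    }
}
```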

> ## Issues
> 
> The open source code on github doesn't match the binary release on maven 
> central:
> 
> * the maven group and the java package name are different.

Both the group ID and the package names are `org.bytedeco`, but in any case, if 
that gets maintained somewhere here, I imagine it would be changed to something 
like `org.apache.mxnet.xyz.internal.etc`

> * c predict API is not included in maven version

Yes it is: 
http://bytedeco.org/javacpp-presets/mxnet/apidocs/org/bytedeco/mxnet/global/mxnet.html
 
> * Example code doesn't work with maven artifacts, it can only build with 
> snapshot version locally.

https://github.com/bytedeco/javacpp-presets/tree/master/mxnet/samples works 
fine for me on Linux:
```
$ mvn -U clean compile exec:java -Djavacpp.platform.custom 
-Djavacpp.platform.host -Dexec.args=apple.jpg
...
Downloading from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/mxnet-platform/1.7.0.rc1-1.5.4-SNAPSHOT/maven-metadata.xml
Downloaded from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/mxnet-platform/1.7.0.rc1-1.5.4-SNAPSHOT/maven-metadata.xml
 (1.3 kB at 2.5 kB/s)
Downloading from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/mxnet-platform/1.7.0.rc1-1.5.4-SNAPSHOT/mxnet-platform-1.7.0.rc1-1.5.4-20200725.115300-20.pom
Downloaded from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/mxnet-platform/1.7.0.rc1-1.5.4-SNAPSHOT/mxnet-platform-1.7.0.rc1-1.5.4-20200725.115300-20.pom
 (4.7 kB at 9.3 kB/s)
Downloading from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp-presets/1.5.4-SNAPSHOT/maven-metadata.xml
Downloaded from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp-presets/1.5.4-SNAPSHOT/maven-metadata.xml
 (610 B at 1.5 kB/s)
Downloading from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp-presets/1.5.4-SNAPSHOT/javacpp-presets-1.5.4-20200725.155410-6590.pom
Downloaded from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/javacpp-presets/1.5.4-SNAPSHOT/javacpp-presets-1.5.4-20200725.155410-6590.pom
 (84 kB at 91 kB/s)
Downloading from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/opencv-platform/4.4.0-1.5.4-SNAPSHOT/maven-metadata.xml
Downloaded from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/opencv-platform/4.4.0-1.5.4-SNAPSHOT/maven-metadata.xml
 (1.2 kB at 2.6 kB/s)
Downloading from sonatype-nexus-snapshots: 
https://oss.sonatype.org/content/repositories/snapshots/org/bytedeco/opencv-platform/

Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-25 Thread Samuel Audet
> @saudet Thanks for your proposal. I have four questions I would like to ask you:
> 
> 1. If we adopt JavaCpp package, how will that be consumed? Under byteco or 
> apache MXNet? Essentially from our previous discussion, we really don't want 
> another 3rdparty checkin.

We can go either way, but I found that projects like MXNet or TensorFlow that 
need to develop high-level APIs on top of something like JavaCPP prefer to have 
control over everything in their own repositories, and use JavaCPP pretty much 
like we would use pybind and pip for Python.

I started the JavaCPP Presets because for projects such as OpenCV, FFmpeg, 
LLVM, etc, high-level APIs for other languages than C/C++ are not being 
developed as part of those projects. I also realized the Java community needed 
something like Anaconda...

> 2. Can you also do a benchmark on the MXNet's API's performance and possibly 
> share the reproducible code? We did test the performance on JavaCpp vs JNA vs 
> JNI and didn't see much difference on performance (under 10%).
> 
> 
> * MXImperativeInvokeEx
> 
> * CachedOpForward
> 
> 
> The above two methods are most frequently used methods in order to do minimum 
> inference request, please try on these two to see how performance goes.
> 

If you're doing only batch operations, as would be the case for Python 
bindings, you're not going to see much difference, no. What you need to look at 
are things like the Indexer package, which allows us to implement fast custom 
operations in Java like this: http://bytedeco.org/news/2014/12/23/third-release/
You're not going to be able to do that with JNA or JNI without essentially 
recoding that kind of thing.
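
To illustrate the point, here is a small self-contained sketch of element-wise 
access to native memory with the Indexer package (the buffer here just stands in 
for something like an NDArray's data; sizes and values are arbitrary):
```
import org.bytedeco.javacpp.FloatPointer;
import org.bytedeco.javacpp.indexer.FloatIndexer;

public class IndexerSketch {
    public static void main(String[] args) {
        // Allocate a small native float buffer and index it directly from Java.
        FloatPointer data = new FloatPointer(8);
        FloatIndexer idx = FloatIndexer.create(data);
        for (long i = 0; i < data.capacity(); i++) {
            idx.put(i, i * 0.5f);  // per-element write without a JNI call per element
        }
        float sum = 0;
        for (long i = 0; i < data.capacity(); i++) {
            sum += idx.get(i);
        }
        System.out.println("sum = " + sum);
        data.deallocate();
    }
}
```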

> 3. We do have some additional technical issues with JavaCpp; is there any plan 
> to fix them? (I will put them into a separate comment since it is really big.)
> 
> 4. How do you ensure the performance if the build flags are different? For 
> example, mxnet has to be built from source (with necessary modifications to 
> the source code) in order to work along with javacpp.
> 
> 5. Regarding the dependency issue, can we go without the additional opencv 
> and openblas in the package?

Yes, those are the kinds of issues that would be best dealt with by using only 
JavaCPP as a low-level tool, instead of the presets, which are basically a 
high-level distribution like Anaconda.

-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-663916338

RE: assimilation of mshadow into the MXNet codebase

2020-07-23 Thread Chen, Ciyong
Hi Justin,

We're currently doing the 1.7.0 source release, and the vote on dev@ has passed: 
vote thread [1], result thread [2].
It seems the discussion on the mshadow donation is still not finalized; may I 
know if you have any concerns about proceeding with the current release under 
DISCLAIMER-WIP?

Thanks,
-Ciyong

[1] 
https://lists.apache.org/thread.html/r525a961a10f69bdfb255c64f0be0589bb70efdd880c1be87c81c0c06%40%3Cdev.mxnet.apache.org%3E
[2] 
https://lists.apache.org/thread.html/rbd53614ca01f714d00097a02d906895211336a14ce0e083865cf5144%40%3Cdev.mxnet.apache.org%3E


-Original Message-
From: Sheng Zha  
Sent: Thursday, July 23, 2020 3:29 PM
To: Justin Mclean 
Cc: d...@mxnet.apache.org; Wall Michael ; Bob Paulin 
; wei...@apache.org; jason...@apache.org
Subject: Re: assimilation of mshadow into the MXNet codebase

Hi,

No, I don’t think we used ICLAs for mshadow before.

Out of the 42 people who made more than 1 commit or more than 10 lines of code 
change to mshadow, 26 signed ICLA with Apache (and additionally one member is 
unfortunately deceased...). Would this be a better criteria as “the major 
ones”? I wasn’t part of the initial code donation or the initial PPMC group, so 
apologies if the questions were silly.

I think the rest of the commits are manageable so that I could do a revert and 
rework for those commits if/when necessary.

Regards,
Sheng

> On Jul 22, 2020, at 11:50 PM, Justin Mclean  wrote:
> 
> Hi,
> 
>> Thanks for clarifying. All contributors who made more than 10 commits to 
>> msahdow before are committers of MXNet, so their ICLAs should already be on 
>> file: tqchen, bingxu, eric.xie, sxjscience, mli, yajiedesign [1]. If you 
>> think this is OK, one of the mentors or I can start the notification.
> 
> 
> What about the other 60 contributors? More than 10 commits is not a line I 
> would feel comfortable with. You need to be able to account for the IP 
> provenance of every line of code, just like in your initial code donation. It 
> would probably be best to make a list all contributors and if they have an 
> ICLA or not. Did the mshadow project use ICLAs? If so that may also help.
> 
> Thanks,
> Justin


Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-23 Thread Lanking
## What's inside of javacpp-presets-mxnet

* Native shared libraries:
  * libmxnet.so
  * libjnimxnet.so
  * libmkldnn.0.so
* MXNet scala and java classes
* javacpp-presets-mxnet java API implementations
* javacpp generated native bindings:
  * mxnet C_API
  * mxnet-predict C_API

## What's missing

javacpp-presets-mxnet doesn't expose APIs from nnvm/c_api.h (some of the current 
python/gluon API depends on APIs in nnvm/c_api.h)


## What are the dependencies
```
org.bytedeco.mxnet:ImageClassificationPredict:jar:1.5-SNAPSHOT
+- org.bytedeco:mxnet-platform:jar:1.4.0-1.5-SNAPSHOT:compile
|  +- org.bytedeco:opencv-platform:jar:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:android-arm:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:android-arm64:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:android-x86:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:android-x86_64:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:ios-arm64:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:ios-x86_64:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:linux-x86:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:linux-x86_64:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:linux-armhf:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:linux-ppc64le:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:macosx-x86_64:4.0.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:opencv:jar:windows-x86:4.0.1-1.5-SNAPSHOT:compile
|  |  \- org.bytedeco:opencv:jar:windows-x86_64:4.0.1-1.5-SNAPSHOT:compile
|  +- org.bytedeco:openblas-platform:jar:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:android-arm:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:android-arm64:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:android-x86:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:android-x86_64:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:ios-arm64:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:ios-x86_64:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:linux-x86:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:linux-x86_64:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:linux-armhf:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:linux-ppc64le:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:macosx-x86_64:0.3.5-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:openblas:jar:windows-x86:0.3.5-1.5-SNAPSHOT:compile
|  |  \- org.bytedeco:openblas:jar:windows-x86_64:0.3.5-1.5-SNAPSHOT:compile
|  +- org.bytedeco:mkl-dnn-platform:jar:0.18.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:mkl-dnn:jar:linux-x86_64:0.18.1-1.5-SNAPSHOT:compile
|  |  +- org.bytedeco:mkl-dnn:jar:macosx-x86_64:0.18.1-1.5-SNAPSHOT:compile
|  |  \- org.bytedeco:mkl-dnn:jar:windows-x86_64:0.18.1-1.5-SNAPSHOT:compile
|  \- org.bytedeco:mxnet:jar:1.4.0-1.5-SNAPSHOT:compile
\- org.bytedeco:mxnet:jar:macosx-x86_64:1.4.0-1.5-SNAPSHOT:compile
   +- org.bytedeco:opencv:jar:4.0.1-1.5-SNAPSHOT:compile
   +- org.bytedeco:openblas:jar:0.3.5-1.5-SNAPSHOT:compile
   +- org.bytedeco:mkl-dnn:jar:0.18.1-1.5-SNAPSHOT:compile
   +- org.bytedeco:javacpp:jar:1.5-SNAPSHOT:compile
   +- org.slf4j:slf4j-simple:jar:1.7.25:compile
   |  \- org.slf4j:slf4j-api:jar:1.7.25:compile
   \- org.scala-lang:scala-library:jar:2.11.12:compile
```


## Build the project from source

I spent 40 minutes building the project on my Mac, and had to make some hacks to 
build it.

* It downloads the mxnet source code and applies some hacks to it
* It uses its own set of compiler flags to build libmxnet.so
* It also builds the MXNet Scala project

## Classes

See javadoc: http://bytedeco.org/javacpp-presets/mxnet/apidocs/


1. The Java class name is “mxnet”, which does not follow Java naming conventions
2. Each pointer has a corresponding Java class, which is arguable. It's 
necessary to expose them as strongly typed classes if they are meant to be used 
directly by end developers, but they really should only be an internal 
implementation detail of the API. It's overkill to expose them as a type instead 
of just a pointer.
3. All the classes (except mxnet.java) are hand written.
4. API mappings are hand coded as well.



## Performance

JavaCPP native library loading takes a long time: it takes an average of *2.6 
seconds* to initialize libmxnet.so with javacpp.

Loader.load(org.bytedeco.mxnet.global.mxnet.class);



## Issues

The open source code on github doesn't match the binary release on maven 
central:

* The maven group and the java package name are different.
* The C predict API is not included in the maven version.
* Example code doesn't work with maven artifacts; it can only be built with the 
snapshot version locally.





-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-663138354

Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-23 Thread Lanking
@saudet Thanks for your proposal. I have three questions I would like to ask you:

1. If we adopt the JavaCpp package, how will it be consumed? Under bytedeco or 
Apache MXNet? Essentially, from our previous discussion, we really don't want 
another 3rdparty check-in.

2. Can you also do a benchmark of MXNet's API performance and possibly 
share the reproducible code? I did have

3. We do have some additional technical issues with JavaCpp; is there any plan 
to fix them? (I will put them into a separate comment since it is really big.)

-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-663137329

Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-23 Thread Carin Meier
@saudet @szha - I think that would be a good path forward (from the Clojure perspective)

-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-663085890

Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-23 Thread Sheng Zha
@saudet this looks awesome! An 18% improvement in throughput is quite 
significant for switching the way of integration for a frontend binding. I 
think we should definitely start with this offering. @lanking520 @gigasquid 
what do you think?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-663064169

Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 JVM Language development (#17783)

2020-07-23 Thread Samuel Audet
Hi, instead of JNA, I would be happy to provide bindings for the C API and 
maintain packages based on the JavaCPP Presets here:
https://github.com/bytedeco/javacpp-presets/tree/master/mxnet

JavaCPP adds no overhead, unlike JNA, and is often faster than manually written 
JNI. Plus, JavaCPP provides more tools than JNA to automate the process of 
parsing header files as well as packaging native libraries in JAR files. I have 
been maintaining modules for TensorFlow based on JavaCPP, and we actually got a 
boost in performance when compared to the original JNI code:
https://github.com/tensorflow/java/pull/18#issuecomment-579600568

I would be able to do the same for MXNet and maintain the result in a 
repository of your choice. Let me know if this sounds interesting! BTW, the 
developers of DJL also seem open to switching from JNA to JavaCPP, even though 
it is not a huge priority. Still, standardizing how native bindings are created 
and loaded with other libraries for which JavaCPP is pretty much already the 
standard (such as OpenCV, TensorFlow, CUDA, FFmpeg, LLVM, Tesseract) could go a 
long way in alleviating concerns about stability.
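
For concreteness, here is a minimal sketch of what a JavaCPP preset definition 
for the MXNet C API could look like. The target package, class name, header 
path, and the `MXNET_DLL` tweak are illustrative assumptions, not the actual 
javacpp-presets sources:
```
import org.bytedeco.javacpp.annotation.Platform;
import org.bytedeco.javacpp.annotation.Properties;
import org.bytedeco.javacpp.tools.Info;
import org.bytedeco.javacpp.tools.InfoMap;
import org.bytedeco.javacpp.tools.InfoMapper;

// Hypothetical preset: the javacpp parser reads mxnet/c_api.h at build time and
// generates Java classes with native bindings into the "target" package.
@Properties(
    value = @Platform(include = "mxnet/c_api.h", link = "mxnet"),
    target = "org.apache.mxnet.internal.capi"  // assumed package name, for illustration only
)
public class MXNetPreset implements InfoMapper {
    @Override
    public void map(InfoMap infoMap) {
        // Example tweak: tell the parser to ignore the export macro on declarations.
        infoMap.put(new Info("MXNET_DLL").cppTypes().annotations());
    }
}
```
The generated code and the native libraries can then be packaged into per-platform 
Maven artifacts, which is the part the presets build automates.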

-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17783#issuecomment-662994965

RE: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-23 Thread Chen, Ciyong
Thanks for your kind reminder, Marco. I will send out the vote on general@ 
later.

Regards,
-Ciyong

-Original Message-
From: Marco de Abreu  
Sent: Thursday, July 23, 2020 3:44 PM
To: dev@mxnet.incubator.apache.org
Cc: d...@mxnet.apache.org; Bob Paulin ; Henri Yandell 
; Jason Dai ; Markus Weimer 
; Michael Wall 
Subject: Re: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 
1.7.0.rc1

Thanks! Please don't forget the release vote on incubator.

-Marco

On Thu, Jul 23, 2020, 9:37 AM Chen, Ciyong  wrote:

> Dear MXNet community,
>
> I'm happy to announce the results of the vote.
> This vote passes with 10 +1 votes (3 binding) and no 0 or -1 votes.
>
> +1 votes
> * Sheng Zha / binding
> * Tao Lv / binding
> * Zhi Zhang / binding
> * Aston Zhang
> * Patric Zhao
> * Skalicky Sam
> * Karan Jariwala
> * Chaitanya Bapat
> * Kshitij Kalambarkar
> * Patrick Mu
>
> 0 votes
> * No votes
>
> -1 votes
> * No votes
>
> Vote thread can be found here [1]. The list of members can be found 
> here [2].
> I'll continue with the release process and the release announcement 
> will follow in the next few days.
>
> Best regards,
> Ciyong Chen
>
> [1]
> https://lists.apache.org/thread.html/r525a961a10f69bdfb255c64f0be0589b
> b70efdd880c1be87c81c0c06%40%3Cdev.mxnet.apache.org%3E
> [2] http://incubator.apache.org/projects/mxnet.html
>
>


Re: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-23 Thread Marco de Abreu
Thanks! Please don't forget the release vote on incubator.

-Marco

On Thu, Jul 23, 2020, 9:37 AM Chen, Ciyong  wrote:

> Dear MXNet community,
>
> I'm happy to announce the results of the vote.
> This vote passes with 10 +1 votes (3 binding) and no 0 or -1 votes.
>
> +1 votes
> * Sheng Zha / binding
> * Tao Lv / binding
> * Zhi Zhang / binding
> * Aston Zhang
> * Patric Zhao
> * Skalicky Sam
> * Karan Jariwala
> * Chaitanya Bapat
> * Kshitij Kalambarkar
> * Patrick Mu
>
> 0 votes
> * No votes
>
> -1 votes
> * No votes
>
> Vote thread can be found here [1]. The list of members can be found here
> [2].
> I'll continue with the release process and the release announcement will
> follow in the next few days.
>
> Best regards,
> Ciyong Chen
>
> [1]
> https://lists.apache.org/thread.html/r525a961a10f69bdfb255c64f0be0589bb70efdd880c1be87c81c0c06%40%3Cdev.mxnet.apache.org%3E
> [2] http://incubator.apache.org/projects/mxnet.html
>
>


Re: assimilation of mshadow into the MXNet codebase

2020-07-23 Thread Sheng Zha
Hi,

No, I don’t think we used ICLAs for mshadow before.

Out of the 42 people who made more than 1 commit or more than 10 lines of code 
change to mshadow, 26 signed an ICLA with Apache (and additionally one member is 
unfortunately deceased...). Would this be a better criterion for “the major 
ones”? I wasn’t part of the initial code donation or the initial PPMC group, so 
apologies if the questions were silly.

I think the rest of the commits are manageable so that I could do a revert and 
rework for those commits if/when necessary.

Regards,
Sheng

> On Jul 22, 2020, at 11:50 PM, Justin Mclean  wrote:
> 
> Hi,
> 
>> Thanks for clarifying. All contributors who made more than 10 commits to 
>> msahdow before are committers of MXNet, so their ICLAs should already be on 
>> file: tqchen, bingxu, eric.xie, sxjscience, mli, yajiedesign [1]. If you 
>> think this is OK, one of the mentors or I can start the notification.
> 
> 
> What about the other 60 contributors? More than 10 commits is not a line I 
> would feel comfortable with. You need to be able to account for the IP 
> provenance of every line of code, just like in your initial code donation. It 
> would probably be best to make a list all contributors and if they have an 
> ICLA or not. Did the mshadow project use ICLAs? If so that may also help.
> 
> Thanks,
> Justin


Re: assimilation of mshadow into the MXNet codebase

2020-07-22 Thread Justin Mclean
Hi,

> Thanks for clarifying. All contributors who made more than 10 commits to 
> mshadow before are committers of MXNet, so their ICLAs should already be on 
> file: tqchen, bingxu, eric.xie, sxjscience, mli, yajiedesign [1]. If you 
> think this is OK, one of the mentors or I can start the notification.


What about the other 60 contributors? More than 10 commits is not a line I 
would feel comfortable with. You need to be able to account for the IP 
provenance of every line of code, just like in your initial code donation. It 
would probably be best to make a list all contributors and if they have an ICLA 
or not. Did the mshadow project use ICLAs? If so that may also help.

Thanks,
Justin

Re: assimilation of mshadow into the MXNet codebase

2020-07-22 Thread Sheng Zha
Thanks for clarifying. All contributors who made more than 10 commits to
mshadow before are committers of MXNet, so their ICLAs should already be on
file: tqchen, bingxu, eric.xie, sxjscience, mli, yajiedesign [1]. If you
think this is OK, one of the mentors or I can start the notification.

Regards,
Sheng

[1] https://github.com/dmlc/mshadow/graphs/contributors

On Wed, Jul 22, 2020 at 10:37 PM Justin Mclean 
wrote:

> Hi,
>
> > Thank you, Justin. Though I’m still uncertain about what the definition
> of IP clearance process is.
>
> The bit you quoted there is for an initial code base, it the second part
> of that document you need to look at.
>
> In short as well as the SGA you need to get signed ICLA from all of the
> contributors to the code base. It might be OK to just get the major ones
> depending on the type of contributions. You then need to notify the
> incubator of the IP clearance and see if they have any questions about it.
>
> Here’s an example:
>
> https://lists.apache.org/thread.html/r750880f7295c1a8c31c99e7a40f3466c177bd714254d0c98a506dede%40%3Cgeneral.incubator.apache.org%3E
>
> Thanks,
> Justin


Re: assimilation of mshadow into the MXNet codebase

2020-07-22 Thread Justin Mclean
Hi,

> Thank you, Justin. Though I’m still uncertain about what the definition of IP 
> clearance process is.

The bit you quoted there is for an initial code base; it's the second part of 
that document that you need to look at.

In short as well as the SGA you need to get signed ICLA from all of the 
contributors to the code base. It might be OK to just get the major ones 
depending on the type of contributions. You then need to notify the incubator 
of the IP clearance and see if they have any questions about it.

Here’s an example:
https://lists.apache.org/thread.html/r750880f7295c1a8c31c99e7a40f3466c177bd714254d0c98a506dede%40%3Cgeneral.incubator.apache.org%3E

Thanks,
Justin

Re: assimilation of mshadow into the MXNet codebase

2020-07-22 Thread Sheng Zha
Thank you, Justin. Though I’m still uncertain about what the definition of the IP 
clearance process is, I found the following paragraphs that seem relevant. 
Sounds like we need three votes from our mentors here for this acceptance. If 
that’s the case, I can start a vote on it.

Regards,
Sheng

> The Incubator PMC must approve the clearance. This indicates that the project 
> is happy to receive the code donated. When a new podling is created, this is 
> done by the identification of existing codebases in the proposal. Otherwise, 
> the IPMC delegates this decision to the PPMC.
> As usual, three binding votes are required. So, Mentors need to be involved 
> in IP clearance for podlings. If too few binding VOTEs are posted on list, 
> the VOTE will need to be posted to the general list for ratification.


> On Jul 22, 2020, at 6:31 PM, Justin Mclean  wrote:
> 
> Hi,
> 
> See also:
> https://incubator.apache.org/guides/ip_clearance.html
> 
> Thanks,
> Justin


Re: assimilation of mshadow into the MXNet codebase

2020-07-22 Thread Justin Mclean
Hi,

See also:
https://incubator.apache.org/guides/ip_clearance.html

Thanks,
Justin


Re: assimilation of mshadow into the MXNet codebase

2020-07-22 Thread Justin Mclean
HI,

> Yes and yes. I filed the software grant and received confirmation from 
> secretary@.

As well as the software grant, the incoming code base needs to go through IP 
clearance. See [1] option 2.

IP clearance involves making sure all contributors have signed ICLAs and 
there are no license or other IP issues and getting IP clearance from the 
incubator. [2]

Thanks,
Justin

1. https://www.apache.org/foundation/how-it-works/legal.html#incoming-code
2. https://incubator.apache.org/ip-clearance/

Re: assimilation of mshadow into the MXNet codebase

2020-07-22 Thread Sheng Zha
Hi Justin,

Yes and yes. I filed the software grant and received confirmation from 
secretary@.

I’m not sure if I should be updating the page, and if so, how.

Regards,
Sheng

> On Jul 22, 2020, at 1:59 AM, Justin Mclean  wrote:
> 
> Hi,
> 
> Has the IP clearance process been followed? I don't see it listed on this 
> page [1]
> 
> Does the current release being voted on contain this code?
> 
> Thanks,
> Justin
> 
> 1. https://incubator.apache.org/ip-clearance/


Re: [apache/incubator-mxnet] [RFC] Apache MXNet 2.0 Roadmap (#16167)

2020-07-22 Thread Sheng Zha
@fhieber we are planning to release the first public beta on this sometime in 
August. At the moment we are finalizing some API changes and also validating 
them in GluonNLP. We will publish a transition doc as part of the public beta.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/16167#issuecomment-662620865

Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-22 Thread Joshua Z. Zhang
y be fixed in the next release? Yes
>>>>>> 
>>>>>> I vote with:
>>>>>> [x] +1 release the software
>>>>>> 
>>>>>> 
>>>>>> On 2020/07/20 17:25:50, "Skalicky, Sam" >> 
>>>>>> wrote:
>>>>>>> +1
>>>>>>> 
>>>>>>> Tested:
>>>>>>> - Make flow building from source, verified all
>> example/extensions/*
>>>>> work
>>>>>> correctly
>>>>>>> - staticbuild flow cpu & cu102 variants producing the pip wheels,
>>>>> tested
>>>>>> with custom extension library
>>>>>>> 
>>>>>>> Sam
>>>>>>> 
>>>>>>> On 7/20/20, 4:07 AM, "Chen, Ciyong" 
>> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>>Thanks Aston, Patric for the vote.
>>>>>>> 
>>>>>>>Hi Community,
>>>>>>> 
>>>>>>>I would like to call for action to test/validate/vote for the
>>>>>> release candidate (1.7.0.rc1).
>>>>>>>As we've not reached the quorum, I would like to extend the
>>>> voting
>>>>>> process to July 22, 23:59:59 PST.
>>>>>>>Please prepare your time and provide feedback if you've tried
>>>> with
>>>>>> the pre-released code base, thanks!
>>>>>>> 
>>>>>>>Best Regards,
>>>>>>>Ciyong
>>>>>>> 
>>>>>>>-Original Message-
>>>>>>>From: Zhao, Patric 
>>>>>>>Sent: Monday, July 20, 2020 11:36 AM
>>>>>>>To: dev@mxnet.incubator.apache.org
>>>>>>>Cc: d...@mxnet.apache.org; Bob Paulin ; Henri
>>>>>> Yandell ; Jason Dai ;
>> Markus
>>>>>> Weimer ; Michael Wall 
>>>>>>>Subject: RE: [VOTE] Release Apache MXNet (incubating) version
>>>>>> 1.7.0.rc1
>>>>>>> 
>>>>>>>+1
>>>>>>> 
>>>>>>>Passed the performance benchmarking for CPU tests and no
>>>> regression
>>>>>> is found.
>>>>>>> 
>>>>>>> 
>>>>>>>> -Original Message-
>>>>>>>> From: Aston Zhang 
>>>>>>>> Sent: Sunday, July 19, 2020 1:45 PM
>>>>>>>> To: dev@mxnet.incubator.apache.org
>>>>>>>> Cc: d...@mxnet.apache.org; Bob Paulin ;
>> Henri
>>>>>> Yandell
>>>>>>>> ; Jason Dai ;
>> Markus
>>>>>> Weimer
>>>>>>>> ; Michael Wall 
>>>>>>>> Subject: Re: [VOTE] Release Apache MXNet (incubating)
>> version
>>>>>>>> 1.7.0.rc1
>>>>>>>> 
>>>>>>>> +1
>>>>>>>> Passed d2l-en v0.14.1:
>>>>>>>> https://github.com/d2l-ai/d2l-en/releases/tag/v0.14.1
>>>>>>>> 
>>>>>>>> On Thu, Jul 16, 2020 at 2:34 AM Chen, Ciyong <
>>>>>> ciyong.c...@intel.com> wrote:
>>>>>>>> 
>>>>>>>>> Dear MXNet community,
>>>>>>>>> 
>>>>>>>>> This is the vote to release Apache MXNet (incubating)
>> version
>>>>>> 1.7.0.
>>>>>>>>> Voting will start 16th July 23:59:59 PST and close on
>> 19th
>>>> July
>>>>>>>>> 23:59:59 PST.
>>>>>>>>> 
>>>>>>>>> Link to release notes:
>>>>>>>>> 
>>>>>> 
>> https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+note
>>>>>>>>> s
>>>>>>>>> 
>>>>>>>>> Link to release candidate:
>>>>>>>>> 
>>>>> https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc1
>>>>>>>>> 
>>>>>>>>> Link to source and signatures on apache dist server:
>>>>>>>>> 
>>>>> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc1
>>>>>>>>> 
>>>>>>>>> Please remember to TEST first before voting accordingly:
>>>>>>>>> +1 = approve
>>>>>>>>> +0 = no opinion
>>>>>>>>> -1 = disapprove (provide reason)
>>>>>>>>> 
>>>>>>>>> Here's the changes comparing to 1.7.0.rc0:
>>>>>>>>> 
>>>>>>>>>  *   Revert "Fix memory leaks in Gluon (#18328) (#18358)
>>>>>> (#18692)
>>>>>>>>>  *   revise activations (#18700)
>>>>>>>>>  *   Fix the monitor_callback invalid issue during
>>>> calibration
>>>>>> with
>>>>>>>>> variable input shapes (#18632) (#18703)
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Best regards,
>>>>>>>>> Ciyong Chen
>>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> --
>>>> *Chaitanya Prakash Bapat*
>>>> *+1 (973) 953-6299*
>>>> 
>>>> [image: https://www.linkedin.com//in/chaibapat25]
>>>> <https://github.com/ChaiBapchya>[image:
>> https://www.facebook.com/chaibapat
>>>> ]
>>>> <https://www.facebook.com/chaibapchya>[image:
>>>> https://twitter.com/ChaiBapchya] <https://twitter.com/ChaiBapchya
>>> [image:
>>>> https://www.linkedin.com//in/chaibapat25]
>>>> <https://www.linkedin.com//in/chaibapchya/>
>>>> 
>>> 
>> 



Re: [apache/incubator-mxnet] [RFC] Apache MXNet 2.0 Roadmap (#16167)

2020-07-22 Thread Felix Hieber
@szha is there a recent estimate on the timeline for MXNet 2.0? Would you 
recommend developing downstream toolkits (e.g. Sockeye) against the master 
branch now, or rather waiting a little longer?
Is there already documentation on how to transition MXNet 1.x projects to 2.x?

-- 
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/16167#issuecomment-662345601

Re: assimilation of mshadow into the MXNet codebase

2020-07-22 Thread Justin Mclean
Hi,

Has the IP clearance process been followed? I don't see it listed on this page 
[1]

Does the current release being voted on contain this code?

Thanks,
Justin

1. https://incubator.apache.org/ip-clearance/


Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-22 Thread Tao Lv
> > Tested:
> > > > > > - Make flow building from source, verified all
> example/extensions/*
> > > > work
> > > > > correctly
> > > > > > - staticbuild flow cpu & cu102 variants producing the pip wheels,
> > > > tested
> > > > > with custom extension library
> > > > > >
> > > > > > Sam
> > > > > >
> > > > > > On 7/20/20, 4:07 AM, "Chen, Ciyong" 
> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > Thanks Aston, Patric for the vote.
> > > > > >
> > > > > > Hi Community,
> > > > > >
> > > > > > I would like to call for action to test/validate/vote for the
> > > > > release candidate (1.7.0.rc1).
> > > > > > As we've not reached the quorum, I would like to extend the
> > > voting
> > > > > process to July 22, 23:59:59 PST.
> > > > > > Please prepare your time and provide feedback if you've tried
> > > with
> > > > > the pre-released code base, thanks!
> > > > > >
> > > > > > Best Regards,
> > > > > > Ciyong
> > > > > >
> > > > > > -Original Message-
> > > > > > From: Zhao, Patric 
> > > > > > Sent: Monday, July 20, 2020 11:36 AM
> > > > > > To: dev@mxnet.incubator.apache.org
> > > > > > Cc: d...@mxnet.apache.org; Bob Paulin ; Henri
> > > > > Yandell ; Jason Dai ;
> Markus
> > > > > Weimer ; Michael Wall 
> > > > > > Subject: RE: [VOTE] Release Apache MXNet (incubating) version
> > > > > 1.7.0.rc1
> > > > > >
> > > > > > +1
> > > > > >
> > > > > > Passed the performance benchmarking for CPU tests and no
> > > regression
> > > > > is found.
> > > > > >
> > > > > >
> > > > > > > -Original Message-
> > > > > > > From: Aston Zhang 
> > > > > > > Sent: Sunday, July 19, 2020 1:45 PM
> > > > > > > To: dev@mxnet.incubator.apache.org
> > > > > > > Cc: d...@mxnet.apache.org; Bob Paulin ;
> Henri
> > > > > Yandell
> > > > > > > ; Jason Dai ;
> Markus
> > > > > Weimer
> > > > > > > ; Michael Wall 
> > > > > > > Subject: Re: [VOTE] Release Apache MXNet (incubating)
> version
> > > > > > > 1.7.0.rc1
> > > > > > >
> > > > > > > +1
> > > > > > > Passed d2l-en v0.14.1:
> > > > > > > https://github.com/d2l-ai/d2l-en/releases/tag/v0.14.1
> > > > > > >
> > > > > > > On Thu, Jul 16, 2020 at 2:34 AM Chen, Ciyong <
> > > > > ciyong.c...@intel.com> wrote:
> > > > > > >
> > > > > > > > Dear MXNet community,
> > > > > > > >
> > > > > > > > This is the vote to release Apache MXNet (incubating)
> version
> > > > > 1.7.0.
> > > > > > > > Voting will start 16th July 23:59:59 PST and close on
> 19th
> > > July
> > > > > > > > 23:59:59 PST.
> > > > > > > >
> > > > > > > > Link to release notes:
> > > > > > > >
> > > > >
> https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+note
> > > > > > > > s
> > > > > > > >
> > > > > > > > Link to release candidate:
> > > > > > > >
> > > > https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc1
> > > > > > > >
> > > > > > > > Link to source and signatures on apache dist server:
> > > > > > > >
> > > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc1
> > > > > > > >
> > > > > > > > Please remember to TEST first before voting accordingly:
> > > > > > > > +1 = approve
> > > > > > > > +0 = no opinion
> > > > > > > > -1 = disapprove (provide reason)
> > > > > > > >
> > > > > > > > Here's the changes comparing to 1.7.0.rc0:
> > > > > > > >
> > > > > > > >   *   Revert "Fix memory leaks in Gluon (#18328) (#18358)
> > > > > (#18692)
> > > > > > > >   *   revise activations (#18700)
> > > > > > > >   *   Fix the monitor_callback invalid issue during
> > > calibration
> > > > > with
> > > > > > > > variable input shapes (#18632) (#18703)
> > > > > > > >
> > > > > > > >
> > > > > > > > Best regards,
> > > > > > > > Ciyong Chen
> > > > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > *Chaitanya Prakash Bapat*
> > > *+1 (973) 953-6299*
> > >
> > > [image: https://www.linkedin.com//in/chaibapat25]
> > > <https://github.com/ChaiBapchya>[image:
> https://www.facebook.com/chaibapat
> > > ]
> > > <https://www.facebook.com/chaibapchya>[image:
> > > https://twitter.com/ChaiBapchya] <https://twitter.com/ChaiBapchya
> >[image:
> > > https://www.linkedin.com//in/chaibapat25]
> > > <https://www.linkedin.com//in/chaibapchya/>
> > >
> >
>


Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-22 Thread Patrick Mu
+ 1

Tested custom operators: all examples using custom operators are passing, no 
error or regression found

Ziyi

On 2020/07/22 06:56:46, Kshitij Kalambarkar  
wrote: 
> + 1
> 
> * Built from source on Ubuntu 18.04 with CUDA, CUDNN
> * Verified test_higher_order_grad.py
> 
> Great job!
> 
> On Wed, Jul 22, 2020 at 12:02 PM Chaitanya Bapat 
> wrote:
> 
> > +1
> >
> > - Built from source on Ubuntu18 with CUDA ON, USE_INT64_TENSOR_SIZE ON
> > - Verified large tensor tests work as expected on a p3.16xl instance [with
> > 8 Tesla V100 GPUs]
> > - Verified OpPerf utility works as expected.
> >
> > Steps followed:
> > https://gist.github.com/ChaiBapchya/8a5131932693d4ca47281368c752b726
> >
> > Thanks Ciyong for leading with the releases. Incredible job.
> >
> > Regards,
> > Chai
> >
> >
> > On Tue, 21 Jul 2020 at 23:05, Karan Jariwala 
> > wrote:
> >
> > > +1
> > >
> > > Build from source on Ubuntu 18 with CUDA/CUDNN/NCCL ON and verified with
> > > Horovod 0.19.5 by running unittest and integration tests.
> > >
> > > Thanks,
> > > Karan
> > >
> > > On Tue, Jul 21, 2020 at 10:23 PM Sheng Zha  wrote:
> > >
> > > > +1. I checked:
> > > >
> > > > [x] Are release files in correct location? Yes
> > > > [x] Do release files have the word incubating in their name? Yes
> > > > [x] Are the digital signature and hashes correct? Yes
> > > > [x] Does DISCLAIMER file exist? Yes, DISCLAIMER-WIP
> > > > [x] Do LICENSE and NOTICE files exists? Yes
> > > > [x] Is the LICENSE and NOTICE text correct?
> > > > Yes, though the license still reads "Copyright [] [name of
> > copyright
> > > > owner]", which needs correction.
> > > >
> > > > [x] Is the NOTICE year correct? Yes
> > > > [x] Un-included software dependencies are not mentioned in LICENSE or
> > > > NOTICE?
> > > > No. mshadow is now contributed to MXNet via software grant and should
> > be
> > > > removed from NOTICE.
> > > >
> > > > [x] License information is not mentioned in NOTICE? Confirmed
> > > >
> > > > Is there any 3rd party code contained inside the release? If so:
> > > > [x] Does the software have a compatible license? Yes. Minor issue:
> > > > Dual license in cmake/Modules/FindJeMalloc.cmake.
> > > >
> > > > [x] Are all software licenses mentioned in LICENSE? Yes
> > > > [x] Is the full text of the licenses (or pointers to it) in LICENSE?
> > Yes
> > > >
> > > > Is any of this code Apache licensed? Do they have NOTICE files? If so:
> > > > [x] Have relevant parts of those NOTICE files been added to this NOTICE
> > > > file?
> > > > No. TVM NOTICE file hasn't been included.
> > > >
> > > > [x] Do all source files have ASF headers?
> > > > Yes, except those in 3rdparty folder and those mentioned in license.
> > > > [x] Do the contents of the release match with what's tagged in version
> > > > control? Yes
> > > > [x] Are there any unexpected binary files in the release? No
> > > > [x] Can you compile from source? Are the instruction clear? Yes,
> > Makefile
> > > > is present and is straightforward.
> > > > Is the issue minor? Yes
> > > > Could it possibly be fixed in the next release? Yes
> > > >
> > > > I vote with:
> > > > [x] +1 release the software
> > > >
> > > >
> > > > On 2020/07/20 17:25:50, "Skalicky, Sam" 
> > > > wrote:
> > > > > +1
> > > > >
> > > > > Tested:
> > > > > - Make flow building from source, verified all example/extensions/*
> > > work
> > > > correctly
> > > > > - staticbuild flow cpu & cu102 variants producing the pip wheels,
> > > tested
> > > > with custom extension library
> > > > >
> > > > > Sam
> > > > >
> > > > > On 7/20/20, 4:07 AM, "Chen, Ciyong"  wrote:
> > > > >
> > > > >
> > > > >
>

Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-21 Thread Kshitij Kalambarkar
+ 1

* Built from source on Ubuntu 18.04 with CUDA, CUDNN
* Verified test_higher_order_grad.py

Great job!

On Wed, Jul 22, 2020 at 12:02 PM Chaitanya Bapat 
wrote:

> +1
>
> - Built from source on Ubuntu18 with CUDA ON, USE_INT64_TENSOR_SIZE ON
> - Verified large tensor tests work as expected on a p3.16xl instance [with
> 8 Tesla V100 GPUs]
> - Verified OpPerf utility works as expected.
>
> Steps followed:
> https://gist.github.com/ChaiBapchya/8a5131932693d4ca47281368c752b726
>
> Thanks Ciyong for leading with the releases. Incredible job.
>
> Regards,
> Chai
>
>
> On Tue, 21 Jul 2020 at 23:05, Karan Jariwala 
> wrote:
>
> > +1
> >
> > Build from source on Ubuntu 18 with CUDA/CUDNN/NCCL ON and verified with
> > Horovod 0.19.5 by running unittest and integration tests.
> >
> > Thanks,
> > Karan
> >
> > On Tue, Jul 21, 2020 at 10:23 PM Sheng Zha  wrote:
> >
> > > +1. I checked:
> > >
> > > [x] Are release files in correct location? Yes
> > > [x] Do release files have the word incubating in their name? Yes
> > > [x] Are the digital signature and hashes correct? Yes
> > > [x] Does DISCLAIMER file exist? Yes, DISCLAIMER-WIP
> > > [x] Do LICENSE and NOTICE files exists? Yes
> > > [x] Is the LICENSE and NOTICE text correct?
> > > Yes, though the license still reads "Copyright [] [name of
> copyright
> > > owner]", which needs correction.
> > >
> > > [x] Is the NOTICE year correct? Yes
> > > [x] Un-included software dependencies are not mentioned in LICENSE or
> > > NOTICE?
> > > No. mshadow is now contributed to MXNet via software grant and should
> be
> > > removed from NOTICE.
> > >
> > > [x] License information is not mentioned in NOTICE? Confirmed
> > >
> > > Is there any 3rd party code contained inside the release? If so:
> > > [x] Does the software have a compatible license? Yes. Minor issue:
> > > Dual license in cmake/Modules/FindJeMalloc.cmake.
> > >
> > > [x] Are all software licenses mentioned in LICENSE? Yes
> > > [x] Is the full text of the licenses (or pointers to it) in LICENSE?
> Yes
> > >
> > > Is any of this code Apache licensed? Do they have NOTICE files? If so:
> > > [x] Have relevant parts of those NOTICE files been added to this NOTICE
> > > file?
> > > No. TVM NOTICE file hasn't been included.
> > >
> > > [x] Do all source files have ASF headers?
> > > Yes, except those in 3rdparty folder and those mentioned in license.
> > > [x] Do the contents of the release match with what's tagged in version
> > > control? Yes
> > > [x] Are there any unexpected binary files in the release? No
> > > [x] Can you compile from source? Are the instruction clear? Yes,
> Makefile
> > > is present and is straightforward.
> > > Is the issue minor? Yes
> > > Could it possibly be fixed in the next release? Yes
> > >
> > > I vote with:
> > > [x] +1 release the software
> > >
> > >
> > > On 2020/07/20 17:25:50, "Skalicky, Sam" 
> > > wrote:
> > > > +1
> > > >
> > > > Tested:
> > > > - Make flow building from source, verified all example/extensions/*
> > work
> > > correctly
> > > > - staticbuild flow cpu & cu102 variants producing the pip wheels,
> > tested
> > > with custom extension library
> > > >
> > > > Sam
> > > >
> > > > On 7/20/20, 4:07 AM, "Chen, Ciyong"  wrote:
> > > >
> > > >
> > > >
> > > >
> > > > Thanks Aston, Patric for the vote.
> > > >
> > > > Hi Community,
> > > >
> > > > I would like to call for action to test/validate/vote for the
> > > release candidate (1.7.0.rc1).
> > > > As we've not reached the quorum, I would like to extend the
> voting
> > > process to July 22, 23:59:59 PST.
> > > > Please prepare your time and provide feedback if you've tried
> with
> > > the pre-released code base, thanks!
> > > >
> > > > Best Regards,
> > > > Ciyong
> > > >
> > > > -Original Messa

Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-21 Thread Chaitanya Bapat
+1

- Built from source on Ubuntu18 with CUDA ON, USE_INT64_TENSOR_SIZE ON
- Verified large tensor tests work as expected on a p3.16xl instance [with
8 Tesla V100 GPUs]
- Verified OpPerf utility works as expected.

Steps followed:
https://gist.github.com/ChaiBapchya/8a5131932693d4ca47281368c752b726

Thanks Ciyong for leading with the releases. Incredible job.

Regards,
Chai


On Tue, 21 Jul 2020 at 23:05, Karan Jariwala 
wrote:

> +1
>
> Build from source on Ubuntu 18 with CUDA/CUDNN/NCCL ON and verified with
> Horovod 0.19.5 by running unittest and integration tests.
>
> Thanks,
> Karan
>
> On Tue, Jul 21, 2020 at 10:23 PM Sheng Zha  wrote:
>
> > +1. I checked:
> >
> > [x] Are release files in correct location? Yes
> > [x] Do release files have the word incubating in their name? Yes
> > [x] Are the digital signature and hashes correct? Yes
> > [x] Does DISCLAIMER file exist? Yes, DISCLAIMER-WIP
> > [x] Do LICENSE and NOTICE files exists? Yes
> > [x] Is the LICENSE and NOTICE text correct?
> > Yes, though the license still reads "Copyright [] [name of copyright
> > owner]", which needs correction.
> >
> > [x] Is the NOTICE year correct? Yes
> > [x] Un-included software dependencies are not mentioned in LICENSE or
> > NOTICE?
> > No. mshadow is now contributed to MXNet via software grant and should be
> > removed from NOTICE.
> >
> > [x] License information is not mentioned in NOTICE? Confirmed
> >
> > Is there any 3rd party code contained inside the release? If so:
> > [x] Does the software have a compatible license? Yes. Minor issue:
> > Dual license in cmake/Modules/FindJeMalloc.cmake.
> >
> > [x] Are all software licenses mentioned in LICENSE? Yes
> > [x] Is the full text of the licenses (or pointers to it) in LICENSE? Yes
> >
> > Is any of this code Apache licensed? Do they have NOTICE files? If so:
> > [x] Have relevant parts of those NOTICE files been added to this NOTICE
> > file?
> > No. TVM NOTICE file hasn't been included.
> >
> > [x] Do all source files have ASF headers?
> > Yes, except those in 3rdparty folder and those mentioned in license.
> > [x] Do the contents of the release match with what's tagged in version
> > control? Yes
> > [x] Are there any unexpected binary files in the release? No
> > [x] Can you compile from source? Are the instruction clear? Yes, Makefile
> > is present and is straightforward.
> > Is the issue minor? Yes
> > Could it possibly be fixed in the next release? Yes
> >
> > I vote with:
> > [x] +1 release the software
> >
> >
> > On 2020/07/20 17:25:50, "Skalicky, Sam" 
> > wrote:
> > > +1
> > >
> > > Tested:
> > > - Make flow building from source, verified all example/extensions/*
> work
> > correctly
> > > - staticbuild flow cpu & cu102 variants producing the pip wheels,
> tested
> > with custom extension library
> > >
> > > Sam
> > >
> > > On 7/20/20, 4:07 AM, "Chen, Ciyong"  wrote:
> > >
> > >
> > >
> > >
> > > Thanks Aston, Patric for the vote.
> > >
> > > Hi Community,
> > >
> > > I would like to call for action to test/validate/vote for the
> > release candidate (1.7.0.rc1).
> > >     As we've not reached the quorum, I would like to extend the voting
> > process to July 22, 23:59:59 PST.
> > > Please prepare your time and provide feedback if you've tried with
> > the pre-released code base, thanks!
> > >
> > > Best Regards,
> > > Ciyong
> > >
> > > -Original Message-
> > > From: Zhao, Patric 
> > > Sent: Monday, July 20, 2020 11:36 AM
> > > To: dev@mxnet.incubator.apache.org
> > > Cc: d...@mxnet.apache.org; Bob Paulin ; Henri
> > Yandell ; Jason Dai ; Markus
> > Weimer ; Michael Wall 
> > > Subject: RE: [VOTE] Release Apache MXNet (incubating) version
> > 1.7.0.rc1
> > >
> > > +1
> > >
> > > Passed the performance benchmarking for CPU tests and no regression
> > is found.
> > >
> > >
> > > > -Original Message-
> > > > From: Aston Zhang 
> > > > Sent: Sunday, July 19, 2020 1:45 PM
> > > 

Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-21 Thread Karan Jariwala
+1

Build from source on Ubuntu 18 with CUDA/CUDNN/NCCL ON and verified with
Horovod 0.19.5 by running unittest and integration tests.

Thanks,
Karan

On Tue, Jul 21, 2020 at 10:23 PM Sheng Zha  wrote:

> +1. I checked:
>
> [x] Are release files in correct location? Yes
> [x] Do release files have the word incubating in their name? Yes
> [x] Are the digital signature and hashes correct? Yes
> [x] Does DISCLAIMER file exist? Yes, DISCLAIMER-WIP
> [x] Do LICENSE and NOTICE files exists? Yes
> [x] Is the LICENSE and NOTICE text correct?
> Yes, though the license still reads "Copyright [] [name of copyright
> owner]", which needs correction.
>
> [x] Is the NOTICE year correct? Yes
> [x] Un-included software dependencies are not mentioned in LICENSE or
> NOTICE?
> No. mshadow is now contributed to MXNet via software grant and should be
> removed from NOTICE.
>
> [x] License information is not mentioned in NOTICE? Confirmed
>
> Is there any 3rd party code contained inside the release? If so:
> [x] Does the software have a compatible license? Yes. Minor issue:
> Dual license in cmake/Modules/FindJeMalloc.cmake.
>
> [x] Are all software licenses mentioned in LICENSE? Yes
> [x] Is the full text of the licenses (or pointers to it) in LICENSE? Yes
>
> Is any of this code Apache licensed? Do they have NOTICE files? If so:
> [x] Have relevant parts of those NOTICE files been added to this NOTICE
> file?
> No. TVM NOTICE file hasn't been included.
>
> [x] Do all source files have ASF headers?
> Yes, except those in 3rdparty folder and those mentioned in license.
> [x] Do the contents of the release match with what's tagged in version
> control? Yes
> [x] Are there any unexpected binary files in the release? No
> [x] Can you compile from source? Are the instruction clear? Yes, Makefile
> is present and is straightforward.
> Is the issue minor? Yes
> Could it possibly be fixed in the next release? Yes
>
> I vote with:
> [x] +1 release the software
>
>
> On 2020/07/20 17:25:50, "Skalicky, Sam" 
> wrote:
> > +1
> >
> > Tested:
> > - Make flow building from source, verified all example/extensions/* work
> correctly
> > - staticbuild flow cpu & cu102 variants producing the pip wheels, tested
> with custom extension library
> >
> > Sam
> >
> > On 7/20/20, 4:07 AM, "Chen, Ciyong"  wrote:
> >
> >
> >
> >
> > Thanks Aston, Patric for the vote.
> >
> > Hi Community,
> >
> > I would like to call for action to test/validate/vote for the
> release candidate (1.7.0.rc1).
> > As we've not reached the quorum, I would like to extend the voting
> process to July 22, 23:59:59 PST.
> > Please prepare your time and provide feedback if you've tried with
> the pre-released code base, thanks!
> >
> > Best Regards,
> > Ciyong
> >
> > -Original Message-
> > From: Zhao, Patric 
> > Sent: Monday, July 20, 2020 11:36 AM
> > To: dev@mxnet.incubator.apache.org
> > Cc: d...@mxnet.apache.org; Bob Paulin ; Henri
> Yandell ; Jason Dai ; Markus
> Weimer ; Michael Wall 
> > Subject: RE: [VOTE] Release Apache MXNet (incubating) version
> 1.7.0.rc1
> >
> > +1
> >
> > Passed the performance benchmarking for CPU tests and no regression
> is found.
> >
> >
> > > -Original Message-
> > > From: Aston Zhang 
> > > Sent: Sunday, July 19, 2020 1:45 PM
> > > To: dev@mxnet.incubator.apache.org
> > > Cc: d...@mxnet.apache.org; Bob Paulin ; Henri
> Yandell
> > > ; Jason Dai ; Markus
> Weimer
> > > ; Michael Wall 
> > > Subject: Re: [VOTE] Release Apache MXNet (incubating) version
> > > 1.7.0.rc1
> > >
> > > +1
> > > Passed d2l-en v0.14.1:
> > > https://github.com/d2l-ai/d2l-en/releases/tag/v0.14.1
> > >
> > > On Thu, Jul 16, 2020 at 2:34 AM Chen, Ciyong <
> ciyong.c...@intel.com> wrote:
> > >
> > > > Dear MXNet community,
> > > >
> > > > This is the vote to release Apache MXNet (incubating) version
> 1.7.0.
> > > > Voting will start 16th July 23:59:59 PST and close on 19th July
> > > > 23:59:59 PST.
> > > >
> > > > Link to release notes:

Re: [DISCUSS] Examples Repo for 2.0

2020-07-21 Thread Sheng Zha
I created the repository at https://github.com/apache/incubator-mxnet-examples. 
Next, we will need to set up CI and move some examples there. I'm currently 
occupied and may not get to it before next month. If someone volunteers to help 
on this I'd appreciate it.

Best,
Sheng
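
For concreteness, a minimal sketch of the kind of periodic check discussed in 
this thread (a hypothetical runner script; the examples/ layout, timeout and 
reporting are assumptions, not an agreed-on design):

    # Hypothetical weekly runner: execute every example script against the
    # latest nightly MXNet build and report the ones that fail.
    import pathlib
    import subprocess
    import sys

    def run_examples(root="examples", timeout=1800):
        failures = []
        for script in sorted(pathlib.Path(root).rglob("*.py")):
            try:
                result = subprocess.run([sys.executable, str(script)],
                                        capture_output=True, timeout=timeout)
            except subprocess.TimeoutExpired:
                failures.append((script, "timed out"))
                continue
            if result.returncode != 0:
                failures.append((script, result.stderr.decode(errors="replace")))
        return failures

    if __name__ == "__main__":
        failed = run_examples()
        for script, err in failed:
            print(f"FAILED {script}\n{err}\n")
        sys.exit(1 if failed else 0)

A scheduled CI job (e.g. a weekly cron trigger) could run this against the 
nightly pip wheel and open an issue whenever the exit code is non-zero.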

On 2020/07/17 06:03:19, Chaitanya Bapat  wrote: 
> Testing nightly is the perfect middle. Daily/per commit would be too much
> and Never would be too less.
> 
> +1 for the proposal!
> 
> On Thu, 16 Jul 2020 at 02:08, Kshitij Kalambarkar <
> kshitijkalambar...@gmail.com> wrote:
> 
> > +1 Great Idea! It is quite useful to have working examples to refer to.
> >
> > On Thu, Jul 16, 2020 at 12:17 PM Marco de Abreu 
> > wrote:
> >
> > > +1 good idea!
> > >
> > > On Thu, Jul 16, 2020, 5:39 AM Skalicky, Sam 
> > > wrote:
> > >
> > > > +1 For regular testing, enhanced doc/tutorial
> > > >
> > > > > On Jul 15, 2020, at 7:40 PM, Sheng Zha  wrote:
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > Hi,
> > > > >
> > > > > Over the years, MXNet accumulated many examples in the example folder
> > > > [1]. However, due to not testing them in the CI and in releases, many
> > of
> > > > them are currently broken. I'd like to propose that we create a new
> > > > examples repo for MXNet 2.0 similar to how PyTorch hosts them [2].
> > > Further
> > > > more, the new examples repo should be evaluated periodically (e.g.
> > > weekly)
> > > > against the master branch to ensure that they are not broken.
> > > > >
> > > > > Thoughts and comments are welcome.
> > > > >
> > > > > Sheng
> > > > >
> > > > > [1] https://github.com/apache/incubator-mxnet/tree/master/example
> > > > > [2] https://github.com/pytorch/examples
> > > >
> > >
> >
> 
> 
> -- 
> *Chaitanya Prakash Bapat*
> *+1 (973) 953-6299*
> 
> [image: https://www.linkedin.com//in/chaibapat25]
> [image: https://www.facebook.com/chaibapat]
> [image:
> https://twitter.com/ChaiBapchya] [image:
> https://www.linkedin.com//in/chaibapat25]
> 
> 


Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-21 Thread Sheng Zha
+1. I checked:

[x] Are release files in correct location? Yes
[x] Do release files have the word incubating in their name? Yes
[x] Are the digital signature and hashes correct? Yes
[x] Does DISCLAIMER file exist? Yes, DISCLAIMER-WIP
[x] Do LICENSE and NOTICE files exist? Yes
[x] Is the LICENSE and NOTICE text correct?
Yes, though the license still reads "Copyright [] [name of copyright 
owner]", which needs correction.

[x] Is the NOTICE year correct? Yes
[x] Un-included software dependencies are not mentioned in LICENSE or NOTICE?
No. mshadow is now contributed to MXNet via software grant and should be 
removed from NOTICE.

[x] License information is not mentioned in NOTICE? Confirmed

Is there any 3rd party code contained inside the release? If so:
[x] Does the software have a compatible license? Yes. Minor issue:
Dual license in cmake/Modules/FindJeMalloc.cmake.

[x] Are all software licenses mentioned in LICENSE? Yes
[x] Is the full text of the licenses (or pointers to it) in LICENSE? Yes

Is any of this code Apache licensed? Do they have NOTICE files? If so:
[x] Have relevant parts of those NOTICE files been added to this NOTICE file?
No. TVM NOTICE file hasn't been included.

[x] Do all source files have ASF headers?
Yes, except those in 3rdparty folder and those mentioned in license.
[x] Do the contents of the release match with what's tagged in version control? 
Yes
[x] Are there any unexpected binary files in the release? No
[x] Can you compile from source? Are the instructions clear? Yes, Makefile is 
present and is straightforward.
Is the issue minor? Yes
Could it possibly be fixed in the next release? Yes

I vote with:
[x] +1 release the software


On 2020/07/20 17:25:50, "Skalicky, Sam"  wrote: 
> +1
> 
> Tested:
> - Make flow building from source, verified all example/extensions/* work 
> correctly
> - staticbuild flow cpu & cu102 variants producing the pip wheels, tested with 
> custom extension library
> 
> Sam
> 
> On 7/20/20, 4:07 AM, "Chen, Ciyong"  wrote:
> 
> 
> 
> 
> Thanks Aston, Patric for the vote.
> 
> Hi Community,
> 
> I would like to call for action to test/validate/vote for the release 
> candidate (1.7.0.rc1).
> As we've not reached the quorum, I would like to extend the voting 
> process to July 22, 23:59:59 PST.
> Please prepare your time and provide feedback if you've tried with the 
> pre-released code base, thanks!
> 
> Best Regards,
> Ciyong
> 
> -Original Message-
> From: Zhao, Patric 
> Sent: Monday, July 20, 2020 11:36 AM
>     To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org; Bob Paulin ; Henri Yandell 
> ; Jason Dai ; Markus Weimer 
> ; Michael Wall 
> Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1
> 
> +1
> 
> Passed the performance benchmarking for CPU tests and no regression is 
> found.
> 
> 
> > -Original Message-
> > From: Aston Zhang 
> > Sent: Sunday, July 19, 2020 1:45 PM
> > To: dev@mxnet.incubator.apache.org
> > Cc: d...@mxnet.apache.org; Bob Paulin ; Henri Yandell
> > ; Jason Dai ; Markus Weimer
> > ; Michael Wall 
> > Subject: Re: [VOTE] Release Apache MXNet (incubating) version
> > 1.7.0.rc1
> >
> > +1
> > Passed d2l-en v0.14.1:
> > https://github.com/d2l-ai/d2l-en/releases/tag/v0.14.1
> >
> > On Thu, Jul 16, 2020 at 2:34 AM Chen, Ciyong  
> wrote:
> >
> > > Dear MXNet community,
> > >
> > > This is the vote to release Apache MXNet (incubating) version 1.7.0.
> > > Voting will start 16th July 23:59:59 PST and close on 19th July
> > > 23:59:59 PST.
> > >
> > > Link to release notes:
> > > https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+note
> > > s
> > >
> > > Link to release candidate:
> > > https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc1
> > >
> > > Link to source and signatures on apache dist server:
> > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc1
> > >
> > > Please remember to TEST first before voting accordingly:
> > > +1 = approve
> > > +0 = no opinion
> > > -1 = disapprove (provide reason)
> > >
> > > Here's the changes comparing to 1.7.0.rc0:
> > >
> > >   *   Revert "Fix memory leaks in Gluon (#18328) (#18358) (#18692)
> > >   *   revise activations (#18700)
> > >   *   Fix the monitor_callback invalid issue during calibration with
> > > variable input shapes (#18632) (#18703)
> > >
> > >
> > > Best regards,
> > > Ciyong Chen
> > >
> 
> 


Re: [apache/incubator-mxnet] [RFC] Raising the toolchain requirements for MXNet 2 (#17968)

2020-07-20 Thread Leonard Lausen
Closed #17968.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17968#event-3568205765

Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-20 Thread Skalicky, Sam
+1

Tested:
- Make flow building from source, verified all example/extensions/* work 
correctly
- staticbuild flow cpu & cu102 variants producing the pip wheels, tested with 
custom extension library

Sam

On 7/20/20, 4:07 AM, "Chen, Ciyong"  wrote:




Thanks Aston, Patric for the vote.

Hi Community,

I would like to call for action to test/validate/vote for the release 
candidate (1.7.0.rc1).
As we've not reached the quorum, I would like to extend the voting process 
to July 22, 23:59:59 PST.
Please prepare your time and provide feedback if you've tried with the 
pre-released code base, thanks!

Best Regards,
Ciyong

-Original Message-
From: Zhao, Patric 
Sent: Monday, July 20, 2020 11:36 AM
To: dev@mxnet.incubator.apache.org
Cc: d...@mxnet.apache.org; Bob Paulin ; Henri Yandell 
; Jason Dai ; Markus Weimer 
; Michael Wall 
Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

+1

Passed the performance benchmarking for CPU tests and no regression is 
found.


> -Original Message-
> From: Aston Zhang 
> Sent: Sunday, July 19, 2020 1:45 PM
> To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org; Bob Paulin ; Henri Yandell
> ; Jason Dai ; Markus Weimer
> ; Michael Wall 
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version
> 1.7.0.rc1
>
> +1
> Passed d2l-en v0.14.1:
> https://github.com/d2l-ai/d2l-en/releases/tag/v0.14.1
>
> On Thu, Jul 16, 2020 at 2:34 AM Chen, Ciyong  
wrote:
>
> > Dear MXNet community,
> >
> > This is the vote to release Apache MXNet (incubating) version 1.7.0.
> > Voting will start 16th July 23:59:59 PST and close on 19th July
> > 23:59:59 PST.
> >
> > Link to release notes:
> > https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+note
> > s
> >
> > Link to release candidate:
> > https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc1
> >
> > Link to source and signatures on apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc1
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> > Here's the changes comparing to 1.7.0.rc0:
> >
> >   *   Revert "Fix memory leaks in Gluon (#18328) (#18358) (#18692)
> >   *   revise activations (#18700)
> >   *   Fix the monitor_callback invalid issue during calibration with
> > variable input shapes (#18632) (#18703)
> >
> >
> > Best regards,
> > Ciyong Chen
> >



RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-20 Thread Chen, Ciyong
Thanks Aston, Patric for the vote.

Hi Community,

I would like to call for action to test/validate/vote for the release candidate 
(1.7.0.rc1).
As we've not reached the quorum, I would like to extend the voting process to 
July 22, 23:59:59 PST.
Please prepare your time and provide feedback if you've tried with the 
pre-released code base, thanks!

Best Regards,
Ciyong

-Original Message-
From: Zhao, Patric  
Sent: Monday, July 20, 2020 11:36 AM
To: dev@mxnet.incubator.apache.org
Cc: d...@mxnet.apache.org; Bob Paulin ; Henri Yandell 
; Jason Dai ; Markus Weimer 
; Michael Wall 
Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

+1

Passed the performance benchmarking for CPU tests and no regression is found.


> -Original Message-
> From: Aston Zhang 
> Sent: Sunday, July 19, 2020 1:45 PM
> To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org; Bob Paulin ; Henri Yandell 
> ; Jason Dai ; Markus Weimer 
> ; Michael Wall 
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 
> 1.7.0.rc1
> 
> +1
> Passed d2l-en v0.14.1: 
> https://github.com/d2l-ai/d2l-en/releases/tag/v0.14.1
> 
> On Thu, Jul 16, 2020 at 2:34 AM Chen, Ciyong  wrote:
> 
> > Dear MXNet community,
> >
> > This is the vote to release Apache MXNet (incubating) version 1.7.0.
> > Voting will start 16th July 23:59:59 PST and close on 19th July
> > 23:59:59 PST.
> >
> > Link to release notes:
> > https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+note
> > s
> >
> > Link to release candidate:
> > https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc1
> >
> > Link to source and signatures on apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc1
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> > Here's the changes comparing to 1.7.0.rc0:
> >
> >   *   Revert "Fix memory leaks in Gluon (#18328) (#18358) (#18692)
> >   *   revise activations (#18700)
> >   *   Fix the monitor_callback invalid issue during calibration with
> > variable input shapes (#18632) (#18703)
> >
> >
> > Best regards,
> > Ciyong Chen
> >


RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-19 Thread Zhao, Patric
+1 

Passed the performance benchmarking for CPU tests and no regression is found.


> -Original Message-
> From: Aston Zhang 
> Sent: Sunday, July 19, 2020 1:45 PM
> To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org; Bob Paulin ; Henri Yandell
> ; Jason Dai ; Markus Weimer
> ; Michael Wall 
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1
> 
> +1
> Passed d2l-en v0.14.1: https://github.com/d2l-ai/d2l-en/releases/tag/v0.14.1
> 
> On Thu, Jul 16, 2020 at 2:34 AM Chen, Ciyong  wrote:
> 
> > Dear MXNet community,
> >
> > This is the vote to release Apache MXNet (incubating) version 1.7.0.
> > Voting will start 16th July 23:59:59 PST and close on 19th July
> > 23:59:59 PST.
> >
> > Link to release notes:
> > https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
> >
> > Link to release candidate:
> > https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc1
> >
> > Link to source and signatures on apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc1
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> > Here's the changes comparing to 1.7.0.rc0:
> >
> >   *   Revert "Fix memory leaks in Gluon (#18328) (#18358) (#18692)
> >   *   revise activations (#18700)
> >   *   Fix the monitor_callback invalid issue during calibration with
> > variable input shapes (#18632) (#18703)
> >
> >
> > Best regards,
> > Ciyong Chen
> >


Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc1

2020-07-18 Thread Aston Zhang
+1
Passed d2l-en v0.14.1: https://github.com/d2l-ai/d2l-en/releases/tag/v0.14.1

On Thu, Jul 16, 2020 at 2:34 AM Chen, Ciyong  wrote:

> Dear MXNet community,
>
> This is the vote to release Apache MXNet (incubating) version 1.7.0.
> Voting will start 16th July 23:59:59 PST and close on 19th July 23:59:59
> PST.
>
> Link to release notes:
> https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
>
> Link to release candidate:
> https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc1
>
> Link to source and signatures on apache dist server:
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc1
>
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
>
> Here's the changes comparing to 1.7.0.rc0:
>
>   *   Revert "Fix memory leaks in Gluon (#18328) (#18358) (#18692)
>   *   revise activations (#18700)
>   *   Fix the monitor_callback invalid issue during calibration with
> variable input shapes (#18632) (#18703)
>
>
> Best regards,
> Ciyong Chen
>


Re: [DISCUSS] Examples Repo for 2.0

2020-07-16 Thread Chaitanya Bapat
Testing nightly is the perfect middle ground. Daily/per-commit would be too much
and never would be too little.

+1 for the proposal!

On Thu, 16 Jul 2020 at 02:08, Kshitij Kalambarkar <
kshitijkalambar...@gmail.com> wrote:

> +1 Great Idea! It is quite useful to have working examples to refer to.
>
> On Thu, Jul 16, 2020 at 12:17 PM Marco de Abreu 
> wrote:
>
> > +1 good idea!
> >
> > On Thu, Jul 16, 2020, 5:39 AM Skalicky, Sam 
> > wrote:
> >
> > > +1 For regular testing, enhanced doc/tutorial
> > >
> > > > On Jul 15, 2020, at 7:40 PM, Sheng Zha  wrote:
> > > >
> > > >
> > > >
> > > >
> > > > Hi,
> > > >
> > > > Over the years, MXNet accumulated many examples in the example folder
> > > [1]. However, due to not testing them in the CI and in releases, many
> of
> > > them are currently broken. I'd like to propose that we create a new
> > > examples repo for MXNet 2.0 similar to how PyTorch hosts them [2].
> > Further
> > > more, the new examples repo should be evaluated periodically (e.g.
> > weekly)
> > > against the master branch to ensure that they are not broken.
> > > >
> > > > Thoughts and comments are welcome.
> > > >
> > > > Sheng
> > > >
> > > > [1] https://github.com/apache/incubator-mxnet/tree/master/example
> > > > [2] https://github.com/pytorch/examples
> > >
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*

[image: https://www.linkedin.com//in/chaibapat25]
[image: https://www.facebook.com/chaibapat]
[image:
https://twitter.com/ChaiBapchya] [image:
https://www.linkedin.com//in/chaibapat25]



Re: [DISCUSS] Examples Repo for 2.0

2020-07-16 Thread Kshitij Kalambarkar
+1 Great Idea! It is quite useful to have working examples to refer to.

On Thu, Jul 16, 2020 at 12:17 PM Marco de Abreu 
wrote:

> +1 good idea!
>
> On Thu, Jul 16, 2020, 5:39 AM Skalicky, Sam 
> wrote:
>
> > +1 For regular testing, enhanced doc/tutorial
> >
> > > On Jul 15, 2020, at 7:40 PM, Sheng Zha  wrote:
> > >
> > >
> > >
> > >
> > > Hi,
> > >
> > > Over the years, MXNet accumulated many examples in the example folder
> > [1]. However, due to not testing them in the CI and in releases, many of
> > them are currently broken. I'd like to propose that we create a new
> > examples repo for MXNet 2.0 similar to how PyTorch hosts them [2].
> Further
> > more, the new examples repo should be evaluated periodically (e.g.
> weekly)
> > against the master branch to ensure that they are not broken.
> > >
> > > Thoughts and comments are welcome.
> > >
> > > Sheng
> > >
> > > [1] https://github.com/apache/incubator-mxnet/tree/master/example
> > > [2] https://github.com/pytorch/examples
> >
>


Re: [DISCUSS] Examples Repo for 2.0

2020-07-15 Thread Marco de Abreu
+1 good idea!

On Thu, Jul 16, 2020, 5:39 AM Skalicky, Sam 
wrote:

> +1 For regular testing, enhanced doc/tutorial
>
> > On Jul 15, 2020, at 7:40 PM, Sheng Zha  wrote:
> >
> >
> >
> >
> > Hi,
> >
> > Over the years, MXNet accumulated many examples in the example folder
> [1]. However, due to not testing them in the CI and in releases, many of
> them are currently broken. I'd like to propose that we create a new
> examples repo for MXNet 2.0 similar to how PyTorch hosts them [2]. Further
> more, the new examples repo should be evaluated periodically (e.g. weekly)
> against the master branch to ensure that they are not broken.
> >
> > Thoughts and comments are welcome.
> >
> > Sheng
> >
> > [1] https://github.com/apache/incubator-mxnet/tree/master/example
> > [2] https://github.com/pytorch/examples
>


Re: [DISCUSS] Examples Repo for 2.0

2020-07-15 Thread Skalicky, Sam
+1 For regular testing, enhanced doc/tutorial

> On Jul 15, 2020, at 7:40 PM, Sheng Zha  wrote:
> 
> 
> 
> 
> Hi,
> 
> Over the years, MXNet accumulated many examples in the example folder [1]. 
> However, due to not testing them in the CI and in releases, many of them are 
> currently broken. I'd like to propose that we create a new examples repo for 
> MXNet 2.0 similar to how PyTorch hosts them [2]. Further more, the new 
> examples repo should be evaluated periodically (e.g. weekly) against the 
> master branch to ensure that they are not broken.
> 
> Thoughts and comments are welcome.
> 
> Sheng
> 
> [1] https://github.com/apache/incubator-mxnet/tree/master/example
> [2] https://github.com/pytorch/examples


Re: [apache/incubator-mxnet] [RFC] Use TVMOp with GPU & Build without libcuda.so in CI (#18716)

2020-07-15 Thread Sheng Zha
Instead of linking tvm to mxnet, can we use TVM to generate source code and 
test it as a custom C++ operator?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/18716#issuecomment-659126976

Re: [apache/incubator-mxnet] [RFC] Use TVMOp with GPU & Build without libcuda.so in CI (#18716)

2020-07-15 Thread Yizhi Liu
I'm fine with disabling tvm op (or marking it as experimental) for now if it 
does need another 4-6 weeks to fully address the underlying problem, as we have 
some more urgent tasks on the numpy side.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/18716#issuecomment-659071604

Re: [apache/incubator-mxnet] [RFC] MXNet 2.0 API Deprecation (#17676)

2020-07-15 Thread Leonard Lausen
NNPACK is currently only supported in the Makefile build 
(https://github.com/apache/incubator-mxnet/issues/15974), which will be 
removed. I think oneDNN (mkldnn) replaced it and we can remove it. Any concerns?

-- 
You are receiving this because you authored the thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17676#issuecomment-659039903

Re: [apache/incubator-mxnet] [RFC] MXNet website improvements (#17982)

2020-07-15 Thread Chaitanya Prakash Bapat
https://github.com/apache/incubator-mxnet/issues/18719
- beta.mxnet.io redirects to the mxnet.incubator.apache.org/ landing page 
[instead of the specific API page]
- the Google index needs to be updated to point to the official MXNet API page 
[instead of the beta site]

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17982#issuecomment-659006955

Re: [apache/incubator-mxnet] [RFC] Use TVMOp with GPU & Build without libcuda.so in CI (#18716)

2020-07-15 Thread Leonard Lausen
> Violates the effort of removing libcuda.so totally, (would be great if 
> someone can elaborate the motivation behind it).

Many customers use a single mxnet build that supports gpu features and deploy 
it to both gpu and cpu machines. Due to the way cuda containers are designed, 
libcuda.so won't be present on the cpu machines. That's why it's better to 
dlopen(cuda) only when it's needed. This not only affects tvmop but also the 
nvrtc feature in mxnet.
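
For illustration, here is a minimal sketch of the lazy-loading idea. It is 
shown in Python via ctypes (which dlopens the library at runtime); the real 
change would live in the C++ backend, and the libcuda.so.1 soname and cuInit 
probe are assumptions for the sketch, not the actual MXNet implementation:

    import ctypes

    def cuda_driver_available():
        """Probe for the CUDA driver at runtime instead of linking against it."""
        try:
            # ctypes.CDLL dlopens the shared object; on a CPU-only machine
            # (no libcuda installed) this raises OSError and we fall back to CPU.
            libcuda = ctypes.CDLL("libcuda.so.1")
        except OSError:
            return False
        libcuda.cuInit.argtypes = [ctypes.c_uint]
        libcuda.cuInit.restype = ctypes.c_int
        return libcuda.cuInit(0) == 0  # 0 == CUDA_SUCCESS

    if __name__ == "__main__":
        print("CUDA driver available:", cuda_driver_available())

Loaded this way, nothing records a hard dependency on libcuda, so the same 
build can be deployed to machines without the driver and no stub library or 
LD_LIBRARY_PATH tweak is needed.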

Using the stubs is a workaround for using dlopen, but it adds the additional 
requirement of modifying LD_LIBRARY_PATH on users' cpu machines. That's not 
always feasible for users, and for mxnet 1.6, which introduced nvrtc, users 
typically just disable the nvrtc feature to be able to deploy the libmxnet.so 
to both cpu and gpu machines. 

Why not fix the underlying problem and then enable tvmop feature?

> Also, When setting -DUSE_TVM_OP=OFF the CI checks would be stuck. 

That doesn't make sense, as we have been running CI successfully with tvm op 
disabled for a couple of months. Maybe you ran into some unrelated flakiness 
and need to retrigger the run? 

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/18716#issuecomment-658846227

Re: [apache/incubator-mxnet] [RFC] Use TVMOp with GPU & Build without libcuda.so in CI (#18716)

2020-07-14 Thread Jinbo Ci
@leezu Would you please take a look? Thank you!

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/18716#issuecomment-658583360

RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-14 Thread Chen, Ciyong
Hi MXNet Community,

I am canceling this vote as there's an issue which broke the Gluon CV Yolo and 
AutoGluon functionality.
Thanks Ziyi and Xingjian for root-causing and fixing the issue [1]. 

The new code base will also include two more fixes (for the Gluon activations 
[2] and the monitor callback [3], respectively). 
I will update the artifacts and start a new vote for rc1 in the following days.

Thanks for everyone's help! Please let me know if there's any other issue with 
1.7.0.

[1] https://github.com/apache/incubator-mxnet/pull/18692
[2] https://github.com/apache/incubator-mxnet/pull/18700
[3] https://github.com/apache/incubator-mxnet/pull/18703

Thanks,
-Ciyong

-Original Message-
From: Chen, Ciyong  
Sent: Tuesday, July 14, 2020 10:13 AM
To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

Thanks all for the effort to double check the performance status and for the 
valuable comments. Let's not take it as a blocker and move forward 
with the 1.7.0 release process.

Thanks,
-Ciyong

-Original Message-
From: Skalicky, Sam  
Sent: Tuesday, July 14, 2020 4:41 AM
To: dev@mxnet.incubator.apache.org; lau...@apache.org; d...@mxnet.apache.org
Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

That’s a good point, 1.6 did have a performance regression since it dropped 
MKLML to simplify the build and fix licensing. 2.0 will have performance degradation 
too in favor of new features. Clearly the community is focusing on features 
rather than performance, at least we're consistent :-)

I would prefer we move forward with the 1.7.0 release and consider performance 
fixes for 1.7.1 (like we did for 1.3.1/1.4.1)

Sam

On 7/13/20, 1:36 PM, "Leonard Lausen"  wrote:




One of the selling points of MXNet is (or used to be) speed and having 
multiple
releases in series with speed regressions may not be acceptable to users 
that
adopted MXNet based on the speed advantage. Should we vote on a 1.7 Beta 
release
and only vote on 1.7 final release once the regressions have been fixed?

On Mon, 2020-07-13 at 19:33 +, Patrick Mu wrote:
> It happens only on CPU, and I did more runs and found that the runtime
> fluctuates very badly, but the average regression is ~10%.
>
>
> Through the previous benchmarks I also found some worse regression 
comparing
> 1.6 to 1.5, like inception inference on CPU, and those regressions were not
> caught.
>
> My 2-cent is it might not be a blocker for the release, and we can have 
room
> for improvement for upcoming 2.0 and 1.7.1 if necessary
>
> Ziyi
>
> On 2020/07/13 08:40:32, "Chen, Ciyong"  wrote:
> > Thanks Ziyi,
> >
> > May I know which platform did you notice the performance regression, 
CPU or
> > GPU? ~20% regression would be a large gap.
> >
> > Thanks,
> > -Ciyong
> >
> > -Original Message-
> > From: Patrick Mu 
> > Sent: Monday, July 13, 2020 4:13 PM
> > To: d...@mxnet.apache.org
> > Subject: Re: RE: [VOTE] Release Apache MXNet (incubating) version 
1.7.0.rc0
> >
> > Hi Ciyong,
> >
> > I have reverted the commit, and I am able to train Yolov3 with no 
problem.
> >
> > However I also noticed there is a ~20% regression in 1.7 comparing with 
1.6
> > in inference Yolov3 with Module API, so we are going to discuss 
tomorrow if
> > that would be an issue for 1.7.
> >
> > Thanks,
> > Ziyi
> >
> > On 2020/07/13 02:19:28, "Chen, Ciyong"  wrote:
> > > Hi Ziyi, Xingjian,
> > >
> > > Thanks for reporting the issues from GluonCV/AutoGluon perspective.
> > > I just did a quick try by reverting the
> > > https://github.com/apache/incubator-mxnet/pull/18358, then the 
behavior is
> > > same as 1.6.0 with the cases in the gist (
> > > https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> > >
> > > Considering there's many end-users using Gluon based API/models, and
> > > introducing a new patch to fix this issue could be risky, so I agree 
that
> > > reverting this PR (#18358) might be the best option for the 1.7.0 
release.
> > > But I'm considering is there any other test cases to cover this 
feature,
> > > which could be helpful to track this kind of code changes in future, 
or
> > > 

RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-13 Thread Chen, Ciyong
Thanks all for the effort to double-check the performance status and for the 
valuable comments. Then let's not take it as a blocker and instead move forward 
with the 1.7.0 release process.

Thanks,
-Ciyong

-Original Message-
From: Skalicky, Sam  
Sent: Tuesday, July 14, 2020 4:41 AM
To: dev@mxnet.incubator.apache.org; lau...@apache.org; d...@mxnet.apache.org
Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

That's a good point, 1.6 did have a performance regression since it dropped 
MKLML to simplify the build and fix licensing. 2.0 will have performance 
degradation too in favor of new features. Clearly the community is focusing on 
features rather than performance; at least we're consistent :-)

I would prefer we move forward with the 1.7.0 release and consider performance 
fixes for 1.7.1 (like we did for 1.3.1/1.4.1)

Sam

On 7/13/20, 1:36 PM, "Leonard Lausen"  wrote:

CAUTION: This email originated from outside of the organization. Do not 
click links or open attachments unless you can confirm the sender and know the 
content is safe.



One of the selling points of MXNet is (or used to be) speed and having 
multiple
releases in series with speed regressions may not be acceptable to users 
that
adopted MXNet based on the speed advantage. Should we vote on a 1.7 Beta 
release
and only vote on 1.7 final release once the regressions have been fixed?

On Mon, 2020-07-13 at 19:33 +, Patrick Mu wrote:
> It happens only on CPU, and I did more runs and found that the runtime
> fluctuates very badly, but the average regression is ~10%.
>
>
> Through the previous benchmarks I also found some worse regression 
comparing
> 1.6 to 1.5 like inception inference on CPU and those regression was not
> caught.
>
> My 2-cent is it might not be a blocker for the release, and we can have 
room
> for improvement for upcoming 2.0 and 1.7.1 if necessary
>
> Ziyi
>
> On 2020/07/13 08:40:32, "Chen, Ciyong"  wrote:
> > Thanks Ziyi,
> >
> > May I know which platform did you notice the performance regression, 
CPU or
> > GPU? ~20% regression would be a large gap.
> >
> > Thanks,
> > -Ciyong
> >
> > -Original Message-----
    > > From: Patrick Mu 
> > Sent: Monday, July 13, 2020 4:13 PM
> > To: d...@mxnet.apache.org
> > Subject: Re: RE: [VOTE] Release Apache MXNet (incubating) version 
1.7.0.rc0
> >
> > Hi Ciyong,
> >
> > I have reverted the commit, and I am able to train Yolov3 with no 
problem.
> >
> > However I also noticed there is a ~20% regression in 1.7 comparing with 
1.6
> > in inference Yolov3 with Module API, so we are going to discuss 
tomorrow if
> > that would be an issue for 1.7.
> >
> > Thanks,
> > Ziyi
> >
> > On 2020/07/13 02:19:28, "Chen, Ciyong"  wrote:
> > > Hi Ziyi, Xingjian,
> > >
> > > Thanks for reporting the issues from GluonCV/AutoGluon perspective.
> > > I just did a quick try by reverting the
> > > https://github.com/apache/incubator-mxnet/pull/18358, then the 
behavior is
> > > same as 1.6.0 with the cases in the gist (
> > > https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> > >
> > > Considering there's many end-users using Gluon based API/models, and
> > > introducing a new patch to fix this issue could be risky, so I agree 
that
> > > reverting this PR (#18358) might be the best option for the 1.7.0 
release.
> > > But I'm considering is there any other test cases to cover this 
feature,
> > > which could be helpful to track this kind of code changes in future, 
or
> > > can you help to verify if this revert do resolve the broken issue at 
your
> > > side?
> > >
> > > > Thus, the real issue is: Should we supporting pickling a Gluon 
Block? If
> > > > not, should we support combining multiprocessing.pool with the Gluon
> > > > Block?
> > > Seems it's more like a new feature for MXNet Gluon Block, probably we 
can
> > > make it available in the next patch/minor release?
> > >
> > > Thanks,
> > > -Ciyong
> > >
> > > -Original Message-
> > > From: Xingjian SHI 
> > > Sent: Saturday, July 11, 2020 4:27 AM
> > > To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> > > Subject: Re: [VOTE] Release Apache MXNet (incubatin

Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-13 Thread Skalicky, Sam
That's a good point, 1.6 did have a performance regression since it dropped 
MKLML to simplify the build and fix licensing. 2.0 will have performance 
degradation too in favor of new features. Clearly the community is focusing on 
features rather than performance; at least we're consistent :-)

I would prefer we move forward with the 1.7.0 release and consider performance 
fixes for 1.7.1 (like we did for 1.3.1/1.4.1)

Sam

On 7/13/20, 1:36 PM, "Leonard Lausen"  wrote:

CAUTION: This email originated from outside of the organization. Do not 
click links or open attachments unless you can confirm the sender and know the 
content is safe.



One of the selling points of MXNet is (or used to be) speed and having 
multiple
releases in series with speed regressions may not be acceptable to users 
that
adopted MXNet based on the speed advantage. Should we vote on a 1.7 Beta 
release
and only vote on 1.7 final release once the regressions have been fixed?

On Mon, 2020-07-13 at 19:33 +, Patrick Mu wrote:
> It happens only on CPU, and I did more runs and found that the runtime
> fluctuates very badly, but the average regression is ~10%.
>
>
> Through the previous benchmarks I also found some worse regression 
comparing
> 1.6 to 1.5 like inception inference on CPU and those regression was not
> caught.
>
> My 2-cent is it might not be a blocker for the release, and we can have 
room
> for improvement for upcoming 2.0 and 1.7.1 if necessary
>
> Ziyi
>
> On 2020/07/13 08:40:32, "Chen, Ciyong"  wrote:
> > Thanks Ziyi,
> >
> > May I know which platform did you notice the performance regression, 
CPU or
> > GPU? ~20% regression would be a large gap.
> >
> > Thanks,
> > -Ciyong
> >
> > -Original Message-
    > > From: Patrick Mu 
> > Sent: Monday, July 13, 2020 4:13 PM
> > To: d...@mxnet.apache.org
> > Subject: Re: RE: [VOTE] Release Apache MXNet (incubating) version 
1.7.0.rc0
> >
> > Hi Ciyong,
> >
> > I have reverted the commit, and I am able to train Yolov3 with no 
problem.
> >
> > However I also noticed there is a ~20% regression in 1.7 comparing with 
1.6
> > in inference Yolov3 with Module API, so we are going to discuss 
tomorrow if
> > that would be an issue for 1.7.
> >
> > Thanks,
> > Ziyi
> >
> > On 2020/07/13 02:19:28, "Chen, Ciyong"  wrote:
> > > Hi Ziyi, Xingjian,
> > >
> > > Thanks for reporting the issues from GluonCV/AutoGluon perspective.
> > > I just did a quick try by reverting the
> > > https://github.com/apache/incubator-mxnet/pull/18358, then the 
behavior is
> > > same as 1.6.0 with the cases in the gist (
> > > https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> > >
> > > Considering there's many end-users using Gluon based API/models, and
> > > introducing a new patch to fix this issue could be risky, so I agree 
that
> > > reverting this PR (#18358) might be the best option for the 1.7.0 
release.
> > > But I'm considering is there any other test cases to cover this 
feature,
> > > which could be helpful to track this kind of code changes in future, 
or
> > > can you help to verify if this revert do resolve the broken issue at 
your
> > > side?
> > >
> > > > Thus, the real issue is: Should we supporting pickling a Gluon 
Block? If
> > > > not, should we support combining multiprocessing.pool with the Gluon
> > > > Block?
> > > Seems it's more like a new feature for MXNet Gluon Block, probably we 
can
> > > make it available in the next patch/minor release?
> > >
> > > Thanks,
> > > -Ciyong
> > >
> > > -Original Message-
> > > From: Xingjian SHI 
> > > Sent: Saturday, July 11, 2020 4:27 AM
> > > To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> > > Subject: Re: [VOTE] Release Apache MXNet (incubating) version 
1.7.0.rc0
> > >
> > > Thanks Ziyi,
> > >
> > > I've discovered the same issue when I'm trying to use AutoGluon with
> > > 1.7.0rc0 and would like to share my finding:
> > >
> > > Basically, I don't think Gluon Block is designed to be pickleble. But
> > > pickling do work for some cases in the old version:
 

RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-13 Thread Leonard Lausen
One of the selling points of MXNet is (or used to be) speed, and having multiple
releases in a row with speed regressions may not be acceptable to users who
adopted MXNet based on the speed advantage. Should we vote on a 1.7 Beta release
and only vote on the 1.7 final release once the regressions have been fixed?

On Mon, 2020-07-13 at 19:33 +, Patrick Mu wrote:
> It happens only on CPU, and I did more runs and found that the runtime
> fluctuates very badly, but the average regression is ~10%.
> 
> 
> Through the previous benchmarks I also found some worse regression comparing
> 1.6 to 1.5 like inception inference on CPU and those regression was not
> caught.
> 
> My 2-cent is it might not be a blocker for the release, and we can have room
> for improvement for upcoming 2.0 and 1.7.1 if necessary
> 
> Ziyi
> 
> On 2020/07/13 08:40:32, "Chen, Ciyong"  wrote:
> > Thanks Ziyi,
> > 
> > May I know which platform did you notice the performance regression, CPU or
> > GPU? ~20% regression would be a large gap.
> > 
> > Thanks,
> > -Ciyong
> > 
> > -Original Message-
> > From: Patrick Mu 
> > Sent: Monday, July 13, 2020 4:13 PM
> > To: d...@mxnet.apache.org
> > Subject: Re: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> > 
> > Hi Ciyong,
> > 
> > I have reverted the commit, and I am able to train Yolov3 with no problem.
> > 
> > However I also noticed there is a ~20% regression in 1.7 comparing with 1.6
> > in inference Yolov3 with Module API, so we are going to discuss tomorrow if
> > that would be an issue for 1.7.
> > 
> > Thanks,
> > Ziyi
> > 
> > On 2020/07/13 02:19:28, "Chen, Ciyong"  wrote:
> > > Hi Ziyi, Xingjian,
> > > 
> > > Thanks for reporting the issues from GluonCV/AutoGluon perspective.
> > > I just did a quick try by reverting the 
> > > https://github.com/apache/incubator-mxnet/pull/18358, then the behavior is
> > > same as 1.6.0 with the cases in the gist (
> > > https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> > > 
> > > Considering there's many end-users using Gluon based API/models, and
> > > introducing a new patch to fix this issue could be risky, so I agree that
> > > reverting this PR (#18358) might be the best option for the 1.7.0 release.
> > > But I'm considering is there any other test cases to cover this feature,
> > > which could be helpful to track this kind of code changes in future, or
> > > can you help to verify if this revert do resolve the broken issue at your
> > > side?
> > > 
> > > > Thus, the real issue is: Should we supporting pickling a Gluon Block? If
> > > > not, should we support combining multiprocessing.pool with the Gluon
> > > > Block?
> > > Seems it's more like a new feature for MXNet Gluon Block, probably we can
> > > make it available in the next patch/minor release?
> > > 
> > > Thanks,
> > > -Ciyong
> > > 
> > > -Original Message-
> > > From: Xingjian SHI 
> > > Sent: Saturday, July 11, 2020 4:27 AM
> > > To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> > > Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> > > 
> > > Thanks Ziyi,
> > > 
> > > I've discovered the same issue when I'm trying to use AutoGluon with
> > > 1.7.0rc0 and would like to share my finding:
> > > 
> > > Basically, I don't think Gluon Block is designed to be pickleble. But
> > > pickling do work for some cases in the old version:
> > > 
> > > I've included two cases in the gist (
> > > https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> > > 
> > > - Case1: we construct a gluon block, hybridize it and feed one NDArray to
> > > help initialize the block. After that, it will no longer be pickleble.
> > > - Case2: we just construct a gluon block and it will be pickleble in
> > > 1.6.0, but won't be pickleble in 1.7.0.
> > > 
> > > Thus, the real issue is: Should we supporting pickling a Gluon Block? If
> > > not, should we support combining multiprocessing.pool with the Gluon
> > > Block? For reference, PyTorch supports pickling the nn.Module as shown in:
> > > https://gist.github.com/sxjscience/90b812a66d445e759c55eedc3ef93668 and
> > > also in the doc (
> > > https://pytorch.org/tutorials/beginner/saving_loading_models.html).
> > > 
> &

Re: RE: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-13 Thread Patrick Mu
It happens only on CPU, and I did more runs and found that the runtime 
fluctuates very badly, but the average regression is ~10%. 

Through the previous benchmarks I also found some worse regressions when 
comparing 1.6 to 1.5, such as Inception inference on CPU, and those regressions 
were not caught. 

My 2 cents is that this might not be a blocker for the release, and we have room 
for improvement in the upcoming 2.0 and 1.7.1 if necessary.
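
Just to illustrate how the average above is obtained (a sketch only, not the 
exact benchmark script I used; run_once stands in for a single inference call 
on a fixed batch):

    import time
    import numpy as np

    def timed_runs(run_once, n_warmup=5, n_runs=50):
        # warm-up iterations are discarded so one-time setup cost is not measured
        for _ in range(n_warmup):
            run_once()
        samples = []
        for _ in range(n_runs):
            start = time.perf_counter()
            run_once()
            samples.append(time.perf_counter() - start)
        # the mean gives the average regression figure, the std shows the fluctuation
        return float(np.mean(samples)), float(np.std(samples))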

Ziyi

On 2020/07/13 08:40:32, "Chen, Ciyong"  wrote: 
> Thanks Ziyi,
> 
> May I know which platform did you notice the performance regression, CPU or 
> GPU? ~20% regression would be a large gap.
> 
> Thanks,
> -Ciyong
> 
> -Original Message-
> From: Patrick Mu  
> Sent: Monday, July 13, 2020 4:13 PM
> To: d...@mxnet.apache.org
> Subject: Re: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> 
> Hi Ciyong,
> 
> I have reverted the commit, and I am able to train Yolov3 with no problem.
> 
> However I also noticed there is a ~20% regression in 1.7 comparing with 1.6 
> in inference Yolov3 with Module API, so we are going to discuss tomorrow if 
> that would be an issue for 1.7.
> 
> Thanks,
> Ziyi
> 
> On 2020/07/13 02:19:28, "Chen, Ciyong"  wrote: 
> > Hi Ziyi, Xingjian,
> > 
> > Thanks for reporting the issues from GluonCV/AutoGluon perspective.
> > I just did a quick try by reverting the 
> > https://github.com/apache/incubator-mxnet/pull/18358, then the behavior is 
> > same as 1.6.0 with the cases in the gist 
> > (https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> > 
> > Considering there's many end-users using Gluon based API/models, and 
> > introducing a new patch to fix this issue could be risky, so I agree that 
> > reverting this PR (#18358) might be the best option for the 1.7.0 release.
> > But I'm considering is there any other test cases to cover this feature, 
> > which could be helpful to track this kind of code changes in future, or can 
> > you help to verify if this revert do resolve the broken issue at your side?
> > 
> > > Thus, the real issue is: Should we supporting pickling a Gluon Block? If 
> > > not, should we support combining multiprocessing.pool with the Gluon 
> > > Block?
> > Seems it's more like a new feature for MXNet Gluon Block, probably we can 
> > make it available in the next patch/minor release?
> > 
> > Thanks,
> > -Ciyong
> > 
> > -Original Message-
> > From: Xingjian SHI  
> > Sent: Saturday, July 11, 2020 4:27 AM
> > To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> > Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> > 
> > Thanks Ziyi,
> > 
> > I've discovered the same issue when I'm trying to use AutoGluon with 
> > 1.7.0rc0 and would like to share my finding:
> > 
> > Basically, I don't think Gluon Block is designed to be pickleble. But 
> > pickling do work for some cases in the old version:
> > 
> > I've included two cases in the gist 
> > (https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> > 
> > - Case1: we construct a gluon block, hybridize it and feed one NDArray to 
> > help initialize the block. After that, it will no longer be pickleble. 
> > - Case2: we just construct a gluon block and it will be pickleble in 1.6.0, 
> > but won't be pickleble in 1.7.0.
> > 
> > Thus, the real issue is: Should we supporting pickling a Gluon Block? If 
> > not, should we support combining multiprocessing.pool with the Gluon Block? 
> > For reference, PyTorch supports pickling the nn.Module as shown in: 
> > https://gist.github.com/sxjscience/90b812a66d445e759c55eedc3ef93668 and 
> > also in the doc 
> > (https://pytorch.org/tutorials/beginner/saving_loading_models.html). 
> > 
> > Best,
> > Xingjian
> > 
> > 
> > On 7/10/20, 11:31 AM, "Patrick Mu"  wrote:
> > 
> > Hi Ciyong, 
> > 
> > I just discovered an issue with the 1.7, which causes the Yolo training 
> > with latest Gluon CV Yolo to fail.
> > 
> > The PR that causes the failure is 
> > https://github.com/apache/incubator-mxnet/pull/18358, which modifies  basic 
> > blocks of Gluon to fix a memory leak issue.
> > 
> > Talked with Leonard, the author of the PR, and he said he found the 
> > root cause, but patching that PR would modifies those Gluon basic blocks 
> > further, which might be risky towards existing models and various customer 
> > models.
> > 
> > So my 2-cents i

Re: Requesting slack access

2020-07-13 Thread Chaitanya Bapat
Hello Leandro,

Welcome to the MXNet Community.
I've sent you the invite to the Slack channel.

Thanks
Chai

On Sun, 12 Jul 2020 at 17:45, Leandro Campos 
wrote:

> Hi,
>
> I'd like to request access to the Slack of MXNet.
>
> Thanks,
>
> Leandro Campos
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*

[image: https://www.linkedin.com//in/chaibapat25]
[image: https://www.facebook.com/chaibapat]
[image:
https://twitter.com/ChaiBapchya] [image:
https://www.linkedin.com//in/chaibapat25]



RE: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-13 Thread Chen, Ciyong
Thanks Ziyi,

May I know on which platform you noticed the performance regression, CPU or 
GPU? A ~20% regression would be a large gap.

Thanks,
-Ciyong

-Original Message-
From: Patrick Mu  
Sent: Monday, July 13, 2020 4:13 PM
To: d...@mxnet.apache.org
Subject: Re: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

Hi Ciyong,

I have reverted the commit, and I am able to train Yolov3 with no problem.

However, I also noticed a ~20% regression in 1.7 compared with 1.6 for Yolov3 
inference with the Module API, so we are going to discuss tomorrow whether that 
would be an issue for 1.7.

Thanks,
Ziyi

On 2020/07/13 02:19:28, "Chen, Ciyong"  wrote: 
> Hi Ziyi, Xingjian,
> 
> Thanks for reporting the issues from GluonCV/AutoGluon perspective.
> I just did a quick try by reverting the 
> https://github.com/apache/incubator-mxnet/pull/18358, then the behavior is 
> same as 1.6.0 with the cases in the gist 
> (https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> 
> Considering there's many end-users using Gluon based API/models, and 
> introducing a new patch to fix this issue could be risky, so I agree that 
> reverting this PR (#18358) might be the best option for the 1.7.0 release.
> But I'm considering is there any other test cases to cover this feature, 
> which could be helpful to track this kind of code changes in future, or can 
> you help to verify if this revert do resolve the broken issue at your side?
> 
> > Thus, the real issue is: Should we supporting pickling a Gluon Block? If 
> > not, should we support combining multiprocessing.pool with the Gluon Block?
> Seems it's more like a new feature for MXNet Gluon Block, probably we can 
> make it available in the next patch/minor release?
> 
> Thanks,
> -Ciyong
> 
> -Original Message-
> From: Xingjian SHI  
> Sent: Saturday, July 11, 2020 4:27 AM
> To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> 
> Thanks Ziyi,
> 
> I've discovered the same issue when I'm trying to use AutoGluon with 1.7.0rc0 
> and would like to share my finding:
> 
> Basically, I don't think Gluon Block is designed to be pickleble. But 
> pickling do work for some cases in the old version:
> 
> I've included two cases in the gist 
> (https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> 
> - Case1: we construct a gluon block, hybridize it and feed one NDArray to 
> help initialize the block. After that, it will no longer be pickleble. 
> - Case2: we just construct a gluon block and it will be pickleble in 1.6.0, 
> but won't be pickleble in 1.7.0.
> 
> Thus, the real issue is: Should we supporting pickling a Gluon Block? If not, 
> should we support combining multiprocessing.pool with the Gluon Block? For 
> reference, PyTorch supports pickling the nn.Module as shown in: 
> https://gist.github.com/sxjscience/90b812a66d445e759c55eedc3ef93668 and also 
> in the doc 
> (https://pytorch.org/tutorials/beginner/saving_loading_models.html). 
> 
> Best,
> Xingjian
> 
> 
> On 7/10/20, 11:31 AM, "Patrick Mu"  wrote:
> 
> Hi Ciyong, 
> 
> I just discovered an issue with the 1.7, which causes the Yolo training 
> with latest Gluon CV Yolo to fail.
> 
> The PR that causes the failure is 
> https://github.com/apache/incubator-mxnet/pull/18358, which modifies  basic 
> blocks of Gluon to fix a memory leak issue.
> 
> Talked with Leonard, the author of the PR, and he said he found the root 
> cause, but patching that PR would modifies those Gluon basic blocks further, 
> which might be risky towards existing models and various customer models.
> 
> So my 2-cents is reverting this PR in 1.7, and try patching the PR in 1.x 
> and 2.0, meaning that the 1.7 won't have memory usage optimized by that 
> feature.
> 
> I'd like to hear what you think about this issue.
> 
> Thanks,
> Ziyi
> 
> 
> On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote: 
> > Hi Community,
> > 
> > I would like to call for action to test/validate/vote for the release 
> candidate (1.7.0.rc0)
> > As there's not any voting result during the scheduled time window, I 
> would like to extend the time windows to July 13, 23:59:59 PST.
> > Please prepare your time and provide feedback if you've tried with the 
> pre-release code bases, thanks!
> > 
> > Best regards,
> > Ciyong
> > 
> > -Original Message-
> > From: Chen, Ciyong  
> > Sent: Monday, July 6, 2020 10:48 PM
> &

Re: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-13 Thread Patrick Mu
Hi Ciyong,

I have reverted the commit, and I am able to train Yolov3 with no problem.

However, I also noticed a ~20% regression in 1.7 compared with 1.6 for Yolov3 
inference with the Module API, so we are going to discuss tomorrow whether that 
would be an issue for 1.7.

Thanks,
Ziyi

On 2020/07/13 02:19:28, "Chen, Ciyong"  wrote: 
> Hi Ziyi, Xingjian,
> 
> Thanks for reporting the issues from GluonCV/AutoGluon perspective.
> I just did a quick try by reverting the 
> https://github.com/apache/incubator-mxnet/pull/18358, then the behavior is 
> same as 1.6.0 with the cases in the gist 
> (https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> 
> Considering there's many end-users using Gluon based API/models, and 
> introducing a new patch to fix this issue could be risky, so I agree that 
> reverting this PR (#18358) might be the best option for the 1.7.0 release.
> But I'm considering is there any other test cases to cover this feature, 
> which could be helpful to track this kind of code changes in future, or can 
> you help to verify if this revert do resolve the broken issue at your side?
> 
> > Thus, the real issue is: Should we supporting pickling a Gluon Block? If 
> > not, should we support combining multiprocessing.pool with the Gluon Block?
> Seems it's more like a new feature for MXNet Gluon Block, probably we can 
> make it available in the next patch/minor release?
> 
> Thanks,
> -Ciyong
> 
> -Original Message-
> From: Xingjian SHI  
> Sent: Saturday, July 11, 2020 4:27 AM
> To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> 
> Thanks Ziyi,
> 
> I've discovered the same issue when I'm trying to use AutoGluon with 1.7.0rc0 
> and would like to share my finding:
> 
> Basically, I don't think Gluon Block is designed to be pickleble. But 
> pickling do work for some cases in the old version:
> 
> I've included two cases in the gist 
> (https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).
> 
> - Case1: we construct a gluon block, hybridize it and feed one NDArray to 
> help initialize the block. After that, it will no longer be pickleble. 
> - Case2: we just construct a gluon block and it will be pickleble in 1.6.0, 
> but won't be pickleble in 1.7.0.
> 
> Thus, the real issue is: Should we supporting pickling a Gluon Block? If not, 
> should we support combining multiprocessing.pool with the Gluon Block? For 
> reference, PyTorch supports pickling the nn.Module as shown in: 
> https://gist.github.com/sxjscience/90b812a66d445e759c55eedc3ef93668 and also 
> in the doc 
> (https://pytorch.org/tutorials/beginner/saving_loading_models.html). 
> 
> Best,
> Xingjian
> 
> 
> On 7/10/20, 11:31 AM, "Patrick Mu"  wrote:
> 
> Hi Ciyong, 
> 
> I just discovered an issue with the 1.7, which causes the Yolo training 
> with latest Gluon CV Yolo to fail.
> 
> The PR that causes the failure is 
> https://github.com/apache/incubator-mxnet/pull/18358, which modifies  basic 
> blocks of Gluon to fix a memory leak issue.
> 
> Talked with Leonard, the author of the PR, and he said he found the root 
> cause, but patching that PR would modifies those Gluon basic blocks further, 
> which might be risky towards existing models and various customer models.
> 
> So my 2-cents is reverting this PR in 1.7, and try patching the PR in 1.x 
> and 2.0, meaning that the 1.7 won't have memory usage optimized by that 
> feature.
> 
> I'd like to hear what you think about this issue.
> 
> Thanks,
> Ziyi
> 
> 
> On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote: 
> > Hi Community,
> > 
> > I would like to call for action to test/validate/vote for the release 
> candidate (1.7.0.rc0)
> > As there's not any voting result during the scheduled time window, I 
> would like to extend the time windows to July 13, 23:59:59 PST.
> > Please prepare your time and provide feedback if you've tried with the 
> pre-release code bases, thanks!
> > 
> > Best regards,
> > Ciyong
> > 
> > -Original Message-
> > From: Chen, Ciyong  
> > Sent: Monday, July 6, 2020 10:48 PM
> > To: d...@mxnet.apache.org
> > Cc: Bob Paulin ; Henri Yandell ; 
> Jason Dai ; Markus Weimer ; Michael 
> Wall 
> > Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> > 
> > For the language bindings and windows platform, may I have your support 
> to help verify these fea

RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-12 Thread Chen, Ciyong
Thanks Marco for raising the license concern and Sheng for the clarification; 
the current release process only targets a source release.

Regards,
-Ciyong

-Original Message-
From: Marco de Abreu  
Sent: Monday, July 13, 2020 4:51 AM
To: dev@mxnet.incubator.apache.org
Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

Okay, thanks for the clarification!

-Marco

On Sun, Jul 12, 2020, 5:58 PM Sheng Zha  wrote:

> Hi Marco,
>
> Since the license issues apply to binary distribution, we should still 
> be able to make official source releases.
>
> Regards,
> Sheng
>
> > On Jul 12, 2020, at 1:10 AM, Marco de Abreu 
> > 
> wrote:
> >
> > Are we in the position to make a release given that we have open license
> > license issues with the ipmc and Apache board? I want to avoid 
> > giving the impression that we are ignoring their requests - my 
> > current understanding is that we are non compliant.
> >
> > -Marco
> >
> >> On Sat, Jul 11, 2020, 9:46 AM Tong He  wrote:
> >>
> >> My +1 on the R binding.
> >>
> >> Tested with
> >>
> >> - Build from source
> >> - Install the R package and check it passed all tests.
> >>
> >>> On 2020/07/10 18:31:27, Patrick Mu  wrote:
> >>> Hi Ciyong,
> >>>
> >>> I just discovered an issue with the 1.7, which causes the Yolo 
> >>> training
> >> with latest Gluon CV Yolo to fail.
> >>>
> >>> The PR that causes the failure is
> >> https://github.com/apache/incubator-mxnet/pull/18358, which 
> >> modifies basic blocks of Gluon to fix a memory leak issue.
> >>>
> >>> Talked with Leonard, the author of the PR, and he said he found 
> >>> the
> root
> >> cause, but patching that PR would modifies those Gluon basic blocks 
> >> further, which might be risky towards existing models and various
> customer
> >> models.
> >>>
> >>> So my 2-cents is reverting this PR in 1.7, and try patching the PR 
> >>> in
> >> 1.x and 2.0, meaning that the 1.7 won't have memory usage optimized 
> >> by
> that
> >> feature.
> >>>
> >>> I'd like to hear what you think about this issue.
> >>>
> >>> Thanks,
> >>> Ziyi
> >>>
> >>>
> >>> On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote:
> >>>> Hi Community,
> >>>>
> >>>> I would like to call for action to test/validate/vote for the 
> >>>> release
> >> candidate (1.7.0.rc0)
> >>>> As there's not any voting result during the scheduled time 
> >>>> window, I
> >> would like to extend the time windows to July 13, 23:59:59 PST.
> >>>> Please prepare your time and provide feedback if you've tried 
> >>>> with the
> >> pre-release code bases, thanks!
> >>>>
> >>>> Best regards,
> >>>> Ciyong
> >>>>
> >>>> -Original Message-
> >>>> From: Chen, Ciyong 
> >>>> Sent: Monday, July 6, 2020 10:48 PM
> >>>> To: d...@mxnet.apache.org
> >>>> Cc: Bob Paulin ; Henri Yandell 
> >>>> ;
> >> Jason Dai ; Markus Weimer ; 
> >> Michael Wall 
> >>>> Subject: RE: [VOTE] Release Apache MXNet (incubating) version
> 1.7.0.rc0
> >>>>
> >>>> For the language bindings and windows platform, may I have your
> >> support to help verify these features? Thanks!
> >>>>
> >>>> @lanking520 to help verify the Scala/Java @gigasquid to help 
> >>>> verify
> >> the Clojure
> >>>> @hetong007 to help verify the R
> >>>> @yajiedesign to help verify the windows platform
> >>>>
> >>>> Best regards,
> >>>> Ciyong Chen
> >>>>
> >>>> -Original Message-
> >>>> From: Chen, Ciyong 
> >>>> Sent: Monday, July 6, 2020 10:39 PM
> >>>> To: d...@mxnet.apache.org
> >>>> Cc: Bob Paulin ; Henri Yandell 
> >>>> ;
> >> Jason Dai ; Markus Weimer ; 
> >> Michael Wall 
> >>>> Subject: [VOTE] Release Apache MXNet (incubating) version 
> >>>> 1.7.0.rc0
> >>>>
> >>>> Dear MXNet community,
> >>>>
> >>>> This is the vote to release Apache MXNet (incubating)

RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-12 Thread Chen, Ciyong
Hi Ziyi, Xingjian,

Thanks for reporting the issues from GluonCV/AutoGluon perspective.
I just did a quick try by reverting 
https://github.com/apache/incubator-mxnet/pull/18358, and the behavior is then 
the same as 1.6.0 for the cases in the gist 
(https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).

Considering that many end users rely on the Gluon-based APIs/models, and that 
introducing a new patch to fix this issue could be risky, I agree that 
reverting this PR (#18358) might be the best option for the 1.7.0 release.
But I'm wondering whether there are any other test cases that could cover this 
feature and help track this kind of code change in the future; a minimal test 
along the lines of the sketch below could be one option. Also, can you help 
verify that this revert does resolve the broken issue on your side?
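
Just as an illustration (the test name and where it would live are hypothetical), 
something like this could have caught the regression described in the gist:

    import pickle
    from mxnet.gluon import nn

    def test_block_is_picklable():
        # a plain, freshly constructed block should survive a pickle round trip
        net = nn.Dense(10)
        restored = pickle.loads(pickle.dumps(net))
        assert isinstance(restored, nn.Dense)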

> Thus, the real issue is: Should we supporting pickling a Gluon Block? If not, 
> should we support combining multiprocessing.pool with the Gluon Block?
It seems more like a new feature for the MXNet Gluon Block; perhaps we can make 
it available in the next patch/minor release?

Thanks,
-Ciyong

-Original Message-
From: Xingjian SHI  
Sent: Saturday, July 11, 2020 4:27 AM
To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

Thanks Ziyi,

I've discovered the same issue when I'm trying to use AutoGluon with 1.7.0rc0 
and would like to share my finding:

Basically, I don't think Gluon Block is designed to be picklable. But pickling 
does work for some cases in the old version:

I've included two cases in the gist 
(https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).

- Case 1: we construct a Gluon block, hybridize it, and feed one NDArray to help 
initialize the block. After that, it is no longer picklable. 
- Case 2: we just construct a Gluon block; it is picklable in 1.6.0 but no 
longer picklable in 1.7.0.

Thus, the real issue is: should we support pickling a Gluon Block? If not, 
should we support combining multiprocessing.pool with the Gluon Block? For 
reference, PyTorch supports pickling nn.Module as shown in 
https://gist.github.com/sxjscience/90b812a66d445e759c55eedc3ef93668 and also in 
the doc (https://pytorch.org/tutorials/beginner/saving_loading_models.html). 
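
In code, the two cases are roughly the following (a sketch with an arbitrary 
small layer, not the exact contents of the gist):

    import pickle
    import mxnet as mx
    from mxnet.gluon import nn

    def can_pickle(block):
        try:
            pickle.dumps(block)
            return True
        except Exception as exc:
            print(type(exc).__name__, exc)
            return False

    # Case 1: hybridize and run one forward pass so the block gets initialized;
    # after that it is no longer picklable.
    net = nn.Dense(10)
    net.initialize()
    net.hybridize()
    net(mx.nd.ones((1, 4)))
    print("case 1 picklable:", can_pickle(net))

    # Case 2: a freshly constructed block; picklable on 1.6.0 but not on
    # 1.7.0rc0, as described above.
    print("case 2 picklable:", can_pickle(nn.Dense(10)))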

Best,
Xingjian


On 7/10/20, 11:31 AM, "Patrick Mu"  wrote:

Hi Ciyong, 

I just discovered an issue with the 1.7, which causes the Yolo training 
with latest Gluon CV Yolo to fail.

The PR that causes the failure is 
https://github.com/apache/incubator-mxnet/pull/18358, which modifies  basic 
blocks of Gluon to fix a memory leak issue.

Talked with Leonard, the author of the PR, and he said he found the root 
cause, but patching that PR would modify those Gluon basic blocks further, 
which might be risky for existing models and various customer models.

So my 2-cents is reverting this PR in 1.7, and try patching the PR in 1.x 
and 2.0, meaning that the 1.7 won't have memory usage optimized by that feature.

I'd like to hear what you think about this issue.

Thanks,
Ziyi


On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote: 
> Hi Community,
> 
> I would like to call for action to test/validate/vote for the release 
candidate (1.7.0.rc0)
> As there's not any voting result during the scheduled time window, I 
would like to extend the time windows to July 13, 23:59:59 PST.
> Please prepare your time and provide feedback if you've tried with the 
pre-release code bases, thanks!
> 
> Best regards,
> Ciyong
> 
> -Original Message-
> From: Chen, Ciyong  
> Sent: Monday, July 6, 2020 10:48 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; 
Jason Dai ; Markus Weimer ; Michael 
Wall 
> Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> 
> For the language bindings and windows platform, may I have your support 
to help verify these features? Thanks!
> 
> @lanking520 to help verify the Scala/Java @gigasquid to help verify the 
Clojure
> @hetong007 to help verify the R
> @yajiedesign to help verify the windows platform
> 
> Best regards,
> Ciyong Chen
> 
> -Original Message-
> From: Chen, Ciyong 
> Sent: Monday, July 6, 2020 10:39 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; 
Jason Dai ; Markus Weimer ; Michael 
Wall 
> Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> 
> Dear MXNet community,
> 
> This is the vote to release Apache MXNet (incubating) version 1.7.0. 
Voting will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.
> 
> Link to release notes:
> https://cwiki.apache.org/confluen

Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-12 Thread Skalicky, Sam
+1

Tested:
- Make flow building from source: example/extensions all work correctly
- staticbuild flow cpu & cu102 variants with custom extension library
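
For anyone reproducing the extensions check, the core of it is just loading the 
built library and then using the operators it registers (sketch only; the .so 
name is an example of what the example/extensions build produces, and the 
operator names depend on the particular library):

    import mxnet as mx

    # registers the custom operators/partitioners exposed by the extension library
    mx.library.load("./libcustomop_lib.so")

    # once loaded, the custom operators appear alongside the built-in ones,
    # e.g. as mx.nd.<registered_op_name>(...)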

Sam

On 7/12/20, 1:52 PM, "Marco de Abreu"  wrote:

CAUTION: This email originated from outside of the organization. Do not 
click links or open attachments unless you can confirm the sender and know the 
content is safe.



Okay, thanks for the clarification!

-Marco

On Sun, Jul 12, 2020, 5:58 PM Sheng Zha  wrote:

> Hi Marco,
>
> Since the license issues apply to binary distribution, we should still be
> able to make official source releases.
>
> Regards,
> Sheng
>
> > On Jul 12, 2020, at 1:10 AM, Marco de Abreu 
> wrote:
> >
> > Are we in the position to make a release given that we have open license
> > issues with the ipmc and Apache board? I want to avoid giving the
> > impression that we are ignoring their requests - my current 
understanding
> > is that we are non compliant.
> >
> > -Marco
> >
> >> On Sat, Jul 11, 2020, 9:46 AM Tong He  wrote:
> >>
> >> My +1 on the R binding.
> >>
> >> Tested with
> >>
> >> - Build from source
> >> - Install the R package and check it passed all tests.
> >>
> >>> On 2020/07/10 18:31:27, Patrick Mu  wrote:
> >>> Hi Ciyong,
> >>>
> >>> I just discovered an issue with the 1.7, which causes the Yolo 
training
> >> with latest Gluon CV Yolo to fail.
> >>>
> >>> The PR that causes the failure is
> >> https://github.com/apache/incubator-mxnet/pull/18358, which modifies
> >> basic blocks of Gluon to fix a memory leak issue.
> >>>
> >>> Talked with Leonard, the author of the PR, and he said he found the
> root
> >> cause, but patching that PR would modifies those Gluon basic blocks
> >> further, which might be risky towards existing models and various
> customer
> >> models.
> >>>
> >>> So my 2-cents is reverting this PR in 1.7, and try patching the PR in
> >> 1.x and 2.0, meaning that the 1.7 won't have memory usage optimized by
> that
> >> feature.
> >>>
> >>> I'd like to hear what you think about this issue.
> >>>
> >>> Thanks,
> >>> Ziyi
> >>>
> >>>
> >>> On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote:
> >>>> Hi Community,
> >>>>
> >>>> I would like to call for action to test/validate/vote for the release
> >> candidate (1.7.0.rc0)
> >>>> As there's not any voting result during the scheduled time window, I
> >> would like to extend the time windows to July 13, 23:59:59 PST.
> >>>> Please prepare your time and provide feedback if you've tried with 
the
> >> pre-release code bases, thanks!
> >>>>
> >>>> Best regards,
> >>>> Ciyong
> >>>>
> >>>> -Original Message-
> >>>> From: Chen, Ciyong 
> >>>> Sent: Monday, July 6, 2020 10:48 PM
> >>>> To: d...@mxnet.apache.org
> >>>> Cc: Bob Paulin ; Henri Yandell ;
> >> Jason Dai ; Markus Weimer ;
> >> Michael Wall 
> >>>> Subject: RE: [VOTE] Release Apache MXNet (incubating) version
> 1.7.0.rc0
> >>>>
> >>>> For the language bindings and windows platform, may I have your
> >> support to help verify these features? Thanks!
> >>>>
> >>>> @lanking520 to help verify the Scala/Java @gigasquid to help verify
> >> the Clojure
> >>>> @hetong007 to help verify the R
> >>>> @yajiedesign to help verify the windows platform
> >>>>
> >>>> Best regards,
> >>>> Ciyong Chen
> >>>>
> >>>> -Original Message-
> >>>> From: Chen, Ciyong 
> >>>> Sent: Monday, July 6, 2020 10:39 PM
> >>>> To: d...@mxnet.apache.org
> >>>> Cc: Bob Paulin ; Henri Yandell ;
> >> Jason Dai ; Markus Weimer ;
> >> Michael Wall 
> >>>> Subject: [VOTE] Release Apache MXNet (in

Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-12 Thread Marco de Abreu
Okay, thanks for the clarification!

-Marco

On Sun, Jul 12, 2020, 5:58 PM Sheng Zha  wrote:

> Hi Marco,
>
> Since the license issues apply to binary distribution, we should still be
> able to make official source releases.
>
> Regards,
> Sheng
>
> > On Jul 12, 2020, at 1:10 AM, Marco de Abreu 
> wrote:
> >
> > Are we in the position to make a release given that we have open license
> > issues with the ipmc and Apache board? I want to avoid giving the
> > impression that we are ignoring their requests - my current understanding
> > is that we are non compliant.
> >
> > -Marco
> >
> >> On Sat, Jul 11, 2020, 9:46 AM Tong He  wrote:
> >>
> >> My +1 on the R binding.
> >>
> >> Tested with
> >>
> >> - Build from source
> >> - Install the R package and check it passed all tests.
> >>
> >>> On 2020/07/10 18:31:27, Patrick Mu  wrote:
> >>> Hi Ciyong,
> >>>
> >>> I just discovered an issue with the 1.7, which causes the Yolo training
> >> with latest Gluon CV Yolo to fail.
> >>>
> >>> The PR that causes the failure is
> >> https://github.com/apache/incubator-mxnet/pull/18358, which modifies
> >> basic blocks of Gluon to fix a memory leak issue.
> >>>
> >>> Talked with Leonard, the author of the PR, and he said he found the
> root
> >> cause, but patching that PR would modifies those Gluon basic blocks
> >> further, which might be risky towards existing models and various
> customer
> >> models.
> >>>
> >>> So my 2-cents is reverting this PR in 1.7, and try patching the PR in
> >> 1.x and 2.0, meaning that the 1.7 won't have memory usage optimized by
> that
> >> feature.
> >>>
> >>> I'd like to hear what you think about this issue.
> >>>
> >>> Thanks,
> >>> Ziyi
> >>>
> >>>
> >>> On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote:
> >>>> Hi Community,
> >>>>
> >>>> I would like to call for action to test/validate/vote for the release
> >> candidate (1.7.0.rc0)
> >>>> As there's not any voting result during the scheduled time window, I
> >> would like to extend the time windows to July 13, 23:59:59 PST.
> >>>> Please prepare your time and provide feedback if you've tried with the
> >> pre-release code bases, thanks!
> >>>>
> >>>> Best regards,
> >>>> Ciyong
> >>>>
> >>>> -Original Message-
> >>>> From: Chen, Ciyong 
> >>>> Sent: Monday, July 6, 2020 10:48 PM
> >>>> To: d...@mxnet.apache.org
> >>>> Cc: Bob Paulin ; Henri Yandell ;
> >> Jason Dai ; Markus Weimer ;
> >> Michael Wall 
> >>>> Subject: RE: [VOTE] Release Apache MXNet (incubating) version
> 1.7.0.rc0
> >>>>
> >>>> For the language bindings and windows platform, may I have your
> >> support to help verify these features? Thanks!
> >>>>
> >>>> @lanking520 to help verify the Scala/Java @gigasquid to help verify
> >> the Clojure
> >>>> @hetong007 to help verify the R
> >>>> @yajiedesign to help verify the windows platform
> >>>>
> >>>> Best regards,
> >>>> Ciyong Chen
> >>>>
> >>>> -Original Message-
> >>>> From: Chen, Ciyong 
> >>>> Sent: Monday, July 6, 2020 10:39 PM
> >>>> To: d...@mxnet.apache.org
> >>>> Cc: Bob Paulin ; Henri Yandell ;
> >> Jason Dai ; Markus Weimer ;
> >> Michael Wall 
> >>>> Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> >>>>
> >>>> Dear MXNet community,
> >>>>
> >>>> This is the vote to release Apache MXNet (incubating) version 1.7.0.
> >> Voting will start July 6, 23:59:59 PST and close on July 9, 23:59:59
> PST.
> >>>>
> >>>> Link to release notes:
> >>>> https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
> >>>>
> >>>> Link to release candidate:
> >>>> https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
> >>>>
> >>>> Link to source and signatures on apache dist server:
> >>>> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<
> >> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
> >>>>
> >>>> Please remember to TEST first before voting accordingly:
> >>>> +1 = approve
> >>>> +0 = no opinion
> >>>> -1 = disapprove (provide reason)
> >>>>
> >>>> Additional notes:
> >>>>
> >>>>  *   There was an issue and discussion[1] regarding on a few numpy
> >> operators failed due to numpy 1.19.0 released on Jun 20, 2020, which
> exists
> >> in all branches (works with numpy <= 1.18.5). As numpy operator is
> still an
> >> experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0
> >> release, so I decided to not block the voting and instead let the
> Community
> >> decide whether this is a blocker for the release.
> >>>>
> >>>> [1] https://github.com/apache/incubator-mxnet/issues/18600
> >>>>
> >>>> Best regards,
> >>>> Ciyong Chen
> >>>>
> >>>>
> >>>
> >>
>


Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-12 Thread Sheng Zha
Hi Marco,

Since the license issues apply to binary distribution, we should still be able 
to make official source releases.

Regards,
Sheng

> On Jul 12, 2020, at 1:10 AM, Marco de Abreu  wrote:
> 
> Are we in the position to make a release given that we have open license
> issues with the ipmc and Apache board? I want to avoid giving the
> impression that we are ignoring their requests - my current understanding
> is that we are non compliant.
> 
> -Marco
> 
>> On Sat, Jul 11, 2020, 9:46 AM Tong He  wrote:
>> 
>> My +1 on the R binding.
>> 
>> Tested with
>> 
>> - Build from source
>> - Install the R package and check it passed all tests.
>> 
>>> On 2020/07/10 18:31:27, Patrick Mu  wrote:
>>> Hi Ciyong,
>>> 
>>> I just discovered an issue with the 1.7, which causes the Yolo training
>> with latest Gluon CV Yolo to fail.
>>> 
>>> The PR that causes the failure is
>> https://github.com/apache/incubator-mxnet/pull/18358, which modifies
>> basic blocks of Gluon to fix a memory leak issue.
>>> 
>>> Talked with Leonard, the author of the PR, and he said he found the root
>> cause, but patching that PR would modifies those Gluon basic blocks
>> further, which might be risky towards existing models and various customer
>> models.
>>> 
>>> So my 2-cents is reverting this PR in 1.7, and try patching the PR in
>> 1.x and 2.0, meaning that the 1.7 won't have memory usage optimized by that
>> feature.
>>> 
>>> I'd like to hear what you think about this issue.
>>> 
>>> Thanks,
>>> Ziyi
>>> 
>>> 
>>> On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote:
>>>> Hi Community,
>>>> 
>>>> I would like to call for action to test/validate/vote for the release
>> candidate (1.7.0.rc0)
>>>> As there's not any voting result during the scheduled time window, I
>> would like to extend the time windows to July 13, 23:59:59 PST.
>>>> Please prepare your time and provide feedback if you've tried with the
>> pre-release code bases, thanks!
>>>> 
>>>> Best regards,
>>>> Ciyong
>>>> 
>>>> -Original Message-
>>>> From: Chen, Ciyong 
>>>> Sent: Monday, July 6, 2020 10:48 PM
>>>> To: d...@mxnet.apache.org
>>>> Cc: Bob Paulin ; Henri Yandell ;
>> Jason Dai ; Markus Weimer ;
>> Michael Wall 
>>>> Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
>>>> 
>>>> For the language bindings and windows platform, may I have your
>> support to help verify these features? Thanks!
>>>> 
>>>> @lanking520 to help verify the Scala/Java @gigasquid to help verify
>> the Clojure
>>>> @hetong007 to help verify the R
>>>> @yajiedesign to help verify the windows platform
>>>> 
>>>> Best regards,
>>>> Ciyong Chen
>>>> 
>>>> -Original Message-
>>>> From: Chen, Ciyong 
>>>> Sent: Monday, July 6, 2020 10:39 PM
>>>> To: d...@mxnet.apache.org
>>>> Cc: Bob Paulin ; Henri Yandell ;
>> Jason Dai ; Markus Weimer ;
>> Michael Wall 
>>>> Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
>>>> 
>>>> Dear MXNet community,
>>>> 
>>>> This is the vote to release Apache MXNet (incubating) version 1.7.0.
>> Voting will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.
>>>> 
>>>> Link to release notes:
>>>> https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
>>>> 
>>>> Link to release candidate:
>>>> https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
>>>> 
>>>> Link to source and signatures on apache dist server:
>>>> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<
>> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
>>>> 
>>>> Please remember to TEST first before voting accordingly:
>>>> +1 = approve
>>>> +0 = no opinion
>>>> -1 = disapprove (provide reason)
>>>> 
>>>> Additional notes:
>>>> 
>>>>  *   There was an issue and discussion[1] regarding on a few numpy
>> operators failed due to numpy 1.19.0 released on Jun 20, 2020, which exists
>> in all branches (works with numpy <= 1.18.5). As numpy operator is still an
>> experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0
>> release, so I decided to not block the voting and instead let the Community
>> decide whether this is a blocker for the release.
>>>> 
>>>> [1] https://github.com/apache/incubator-mxnet/issues/18600
>>>> 
>>>> Best regards,
>>>> Ciyong Chen
>>>> 
>>>> 
>>> 
>> 


Re: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-12 Thread Marco de Abreu
Are we in the position to make a release given that we have open license
issues with the ipmc and Apache board? I want to avoid giving the
impression that we are ignoring their requests - my current understanding
is that we are non compliant.

-Marco

On Sat, Jul 11, 2020, 9:46 AM Tong He  wrote:

> My +1 on the R binding.
>
> Tested with
>
> - Build from source
> - Install the R package and check it passed all tests.
>
> On 2020/07/10 18:31:27, Patrick Mu  wrote:
> > Hi Ciyong,
> >
> > I just discovered an issue with the 1.7, which causes the Yolo training
> with latest Gluon CV Yolo to fail.
> >
> > The PR that causes the failure is
> https://github.com/apache/incubator-mxnet/pull/18358, which modifies
> basic blocks of Gluon to fix a memory leak issue.
> >
> > Talked with Leonard, the author of the PR, and he said he found the root
> cause, but patching that PR would modifies those Gluon basic blocks
> further, which might be risky towards existing models and various customer
> models.
> >
> > So my 2-cents is reverting this PR in 1.7, and try patching the PR in
> 1.x and 2.0, meaning that the 1.7 won't have memory usage optimized by that
> feature.
> >
> > I'd like to hear what you think about this issue.
> >
> > Thanks,
> > Ziyi
> >
> >
> > On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote:
> > > Hi Community,
> > >
> > > I would like to call for action to test/validate/vote for the release
> candidate (1.7.0.rc0)
> > > As there's not any voting result during the scheduled time window, I
> would like to extend the time windows to July 13, 23:59:59 PST.
> > > Please prepare your time and provide feedback if you've tried with the
> pre-release code bases, thanks!
> > >
> > > Best regards,
> > > Ciyong
> > >
> > > -Original Message-
> > > From: Chen, Ciyong 
> > > Sent: Monday, July 6, 2020 10:48 PM
> > > To: d...@mxnet.apache.org
> > > Cc: Bob Paulin ; Henri Yandell ;
> Jason Dai ; Markus Weimer ;
> Michael Wall 
> > > Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> > >
> > > For the language bindings and windows platform, may I have your
> support to help verify these features? Thanks!
> > >
> > > @lanking520 to help verify the Scala/Java @gigasquid to help verify
> the Clojure
> > > @hetong007 to help verify the R
> > > @yajiedesign to help verify the windows platform
> > >
> > > Best regards,
> > > Ciyong Chen
> > >
> > > -Original Message-
> > > From: Chen, Ciyong 
> > > Sent: Monday, July 6, 2020 10:39 PM
> > > To: d...@mxnet.apache.org
> > > Cc: Bob Paulin ; Henri Yandell ;
> Jason Dai ; Markus Weimer ;
> Michael Wall 
> > > Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> > >
> > > Dear MXNet community,
> > >
> > > This is the vote to release Apache MXNet (incubating) version 1.7.0.
> Voting will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.
> > >
> > > Link to release notes:
> > > https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
> > >
> > > Link to release candidate:
> > > https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
> > >
> > > Link to source and signatures on apache dist server:
> > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
> > >
> > > Please remember to TEST first before voting accordingly:
> > > +1 = approve
> > > +0 = no opinion
> > > -1 = disapprove (provide reason)
> > >
> > > Additional notes:
> > >
> > >   *   There was an issue and discussion[1] regarding on a few numpy
> operators failed due to numpy 1.19.0 released on Jun 20, 2020, which exists
> in all branches (works with numpy <= 1.18.5). As numpy operator is still an
> experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0
> release, so I decided to not block the voting and instead let the Community
> decide whether this is a blocker for the release.
> > >
> > > [1] https://github.com/apache/incubator-mxnet/issues/18600
> > >
> > > Best regards,
> > > Ciyong Chen
> > >
> > >
> >
>


Re: I would like to access MxNet Slack

2020-07-11 Thread Chaitanya Bapat
Hello Leon,

I've sent you an invite to the Slack group.

Welcome to the MXNet Community.

Cheers,
Chai

On Fri, 10 Jul 2020 at 21:54, Leon Jian  wrote:

> Hi Sir,
>
> I would like to access MxNet Slack.
> leon.j...@papagoinc.com 
>
> Thank you.
>
> -Leon
>
>
>

-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*

[image: https://www.linkedin.com//in/chaibapat25]
[image: https://www.facebook.com/chaibapat]
[image:
https://twitter.com/ChaiBapchya] [image:
https://www.linkedin.com//in/chaibapat25]



Re: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-11 Thread Tong He
My +1 on the R binding.

Tested with

- Build from source
- Install the R package and check it passed all tests.

On 2020/07/10 18:31:27, Patrick Mu  wrote: 
> Hi Ciyong, 
> 
> I just discovered an issue with the 1.7, which causes the Yolo training with 
> latest Gluon CV Yolo to fail.
> 
> The PR that causes the failure is 
> https://github.com/apache/incubator-mxnet/pull/18358, which modifies  basic 
> blocks of Gluon to fix a memory leak issue.
> 
> Talked with Leonard, the author of the PR, and he said he found the root 
> cause, but patching that PR would modifies those Gluon basic blocks further, 
> which might be risky towards existing models and various customer models.
> 
> So my 2-cents is reverting this PR in 1.7, and try patching the PR in 1.x and 
> 2.0, meaning that the 1.7 won't have memory usage optimized by that feature.
> 
> I'd like to hear what you think about this issue.
> 
> Thanks,
> Ziyi
> 
> 
> On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote: 
> > Hi Community,
> > 
> > I would like to call for action to test/validate/vote for the release 
> > candidate (1.7.0.rc0)
> > As there's not any voting result during the scheduled time window, I would 
> > like to extend the time windows to July 13, 23:59:59 PST.
> > Please prepare your time and provide feedback if you've tried with the 
> > pre-release code bases, thanks!
> > 
> > Best regards,
> > Ciyong
> > 
> > -Original Message-
> > From: Chen, Ciyong  
> > Sent: Monday, July 6, 2020 10:48 PM
> > To: d...@mxnet.apache.org
> > Cc: Bob Paulin ; Henri Yandell ; Jason 
> > Dai ; Markus Weimer ; Michael Wall 
> > 
> > Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> > 
> > For the language bindings and windows platform, may I have your support to 
> > help verify these features? Thanks!
> > 
> > @lanking520 to help verify the Scala/Java @gigasquid to help verify the 
> > Clojure
> > @hetong007 to help verify the R
> > @yajiedesign to help verify the windows platform
> > 
> > Best regards,
> > Ciyong Chen
> > 
> > -Original Message-
> > From: Chen, Ciyong 
> > Sent: Monday, July 6, 2020 10:39 PM
> > To: d...@mxnet.apache.org
> > Cc: Bob Paulin ; Henri Yandell ; Jason 
> > Dai ; Markus Weimer ; Michael Wall 
> > 
> > Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> > 
> > Dear MXNet community,
> > 
> > This is the vote to release Apache MXNet (incubating) version 1.7.0. Voting 
> > will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.
> > 
> > Link to release notes:
> > https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
> > 
> > Link to release candidate:
> > https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
> > 
> > Link to source and signatures on apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
> > 
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> > 
> > Additional notes:
> > 
> >   *   There was an issue and discussion[1] regarding on a few numpy 
> > operators failed due to numpy 1.19.0 released on Jun 20, 2020, which exists 
> > in all branches (works with numpy <= 1.18.5). As numpy operator is still an 
> > experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0 
> > release, so I decided to not block the voting and instead let the Community 
> > decide whether this is a blocker for the release.
> > 
> > [1] https://github.com/apache/incubator-mxnet/issues/18600
> > 
> > Best regards,
> > Ciyong Chen
> > 
> > 
> 


Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-10 Thread Xingjian SHI
-1 (binding). 

It breaks the YOLO training in GluonCV and the basic image classification in 
AutoGluon. There is an open PR that reverts the `weakref` fix: 
https://github.com/apache/incubator-mxnet/pull/18692 and we need to see whether 
to revert that fix or find some other ways to solve the issue.

Given the current status, we should not release 1.7.0rc. Thus, I voted for -1.

Best,
Xingjian

On 7/10/20, 9:46 PM, "Aston Zhang"  wrote:

+1

Tested:

mxnet 1.7.0rc0-cu101 passed d2l-en v0.14.0

On Fri, Jul 10, 2020 at 12:27 PM Qing Lan  wrote:

> My +1 (binding) on 1.7.0
>
> Tested:
>
>   *   Build from Source with static build instruction
>   *   Tested Scala pacakge and passed all tests
>
> Thanks,
> Qing
>
> 
> From: Tao Lv 
> Sent: Friday, July 10, 2020 0:03
> To: dev@mxnet.incubator.apache.org 
> Cc: d...@mxnet.apache.org ; Bob Paulin <
> b...@apache.org>; Henri Yandell ; Jason Dai <
> jason...@apache.org>; Markus Weimer ; Michael Wall <
> mjw...@apache.org>
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
>
> +1 (binding)
>
> I did:
> - Verify the key and signature;
> - Untar the source code package;
> - Build from source code with makefile, USE_BLAS=mkl, USE_MKLDNN=1;
> - Check mx.__version__;
> - Run benchmark_score.py under examples/image-classification.
>
> -tao
>
> On Fri, Jul 10, 2020 at 2:18 PM Chen, Ciyong 
> wrote:
>
> > Hi Community,
> >
> > I would like to call for action to test/validate/vote for the release
> > candidate (1.7.0.rc0)
> > As there's not any voting result during the scheduled time window, I
> would
> > like to extend the time windows to July 13, 23:59:59 PST.
> > Please prepare your time and provide feedback if you've tried with the
> > pre-release code bases, thanks!
> >
> > Best regards,
> > Ciyong
> >
> > -Original Message-
> > From: Chen, Ciyong 
> > Sent: Monday, July 6, 2020 10:48 PM
> > To: d...@mxnet.apache.org
> > Cc: Bob Paulin ; Henri Yandell ;
> Jason
> > Dai ; Markus Weimer ; Michael
> > Wall 
> > Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> >
> > For the language bindings and windows platform, may I have your support
> to
> > help verify these features? Thanks!
> >
> > @lanking520 to help verify the Scala/Java @gigasquid to help verify the
> > Clojure
> > @hetong007 to help verify the R
> > @yajiedesign to help verify the windows platform
> >
> > Best regards,
> > Ciyong Chen
> >
> > -Original Message-
> > From: Chen, Ciyong 
> > Sent: Monday, July 6, 2020 10:39 PM
> > To: d...@mxnet.apache.org
> > Cc: Bob Paulin ; Henri Yandell ;
> Jason
> > Dai ; Markus Weimer ; Michael
> > Wall 
> > Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> >
> > Dear MXNet community,
> >
> > This is the vote to release Apache MXNet (incubating) version 1.7.0.
> > Voting will start July 6, 23:59:59 PST and close on July 9, 23:59:59 
PST.
> >
> > Link to release notes:
> > https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
> >
> > Link to release candidate:
> > https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
> >
> > Link to source and signatures on apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> > Additional notes:
> >
> >   *   There was an issue and discussion[1] regarding on a few numpy
> > operators failed due to numpy 1.19.0 released on Jun 20, 2020, which
> exists
> > in all branches (works with numpy <= 1.18.5). As numpy operator is still
> an
> > experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0
> > release, so I decided to not block the voting and instead let the
> Community
> > decide whether this is a blocker for the release.
> >
> > [1] https://github.com/apache/incubator-mxnet/issues/18600
> >
> > Best regards,
> > Ciyong Chen
> >
> >
>



Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-10 Thread Aston Zhang
+1

Tested:

mxnet 1.7.0rc0-cu101 passed d2l-en v0.14.0

On Fri, Jul 10, 2020 at 12:27 PM Qing Lan  wrote:

> My +1 (binding) on 1.7.0
>
> Tested:
>
>   *   Build from Source with static build instruction
>   *   Tested Scala pacakge and passed all tests
>
> Thanks,
> Qing
>
> 
> From: Tao Lv 
> Sent: Friday, July 10, 2020 0:03
> To: dev@mxnet.incubator.apache.org 
> Cc: d...@mxnet.apache.org ; Bob Paulin <
> b...@apache.org>; Henri Yandell ; Jason Dai <
> jason...@apache.org>; Markus Weimer ; Michael Wall <
> mjw...@apache.org>
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
>
> +1 (binding)
>
> I did:
> - Verify the key and signature;
> - Untar the source code package;
> - Build from source code with makefile, USE_BLAS=mkl, USE_MKLDNN=1;
> - Check mx.__version__;
> - Run benchmark_score.py under examples/image-classification.
>
> -tao
>
> On Fri, Jul 10, 2020 at 2:18 PM Chen, Ciyong 
> wrote:
>
> > Hi Community,
> >
> > I would like to call for action to test/validate/vote for the release
> > candidate (1.7.0.rc0)
> > As there's not any voting result during the scheduled time window, I
> would
> > like to extend the time windows to July 13, 23:59:59 PST.
> > Please prepare your time and provide feedback if you've tried with the
> > pre-release code bases, thanks!
> >
> > Best regards,
> > Ciyong
> >
> > -Original Message-
> > From: Chen, Ciyong 
> > Sent: Monday, July 6, 2020 10:48 PM
> > To: d...@mxnet.apache.org
> > Cc: Bob Paulin ; Henri Yandell ;
> Jason
> > Dai ; Markus Weimer ; Michael
> > Wall 
> > Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> >
> > For the language bindings and windows platform, may I have your support
> to
> > help verify these features? Thanks!
> >
> > @lanking520 to help verify the Scala/Java @gigasquid to help verify the
> > Clojure
> > @hetong007 to help verify the R
> > @yajiedesign to help verify the windows platform
> >
> > Best regards,
> > Ciyong Chen
> >
> > -Original Message-
> > From: Chen, Ciyong 
> > Sent: Monday, July 6, 2020 10:39 PM
> > To: d...@mxnet.apache.org
> > Cc: Bob Paulin ; Henri Yandell ;
> Jason
> > Dai ; Markus Weimer ; Michael
> > Wall 
> > Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> >
> > Dear MXNet community,
> >
> > This is the vote to release Apache MXNet (incubating) version 1.7.0.
> > Voting will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.
> >
> > Link to release notes:
> > https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
> >
> > Link to release candidate:
> > https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
> >
> > Link to source and signatures on apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> > Additional notes:
> >
> >   *   There was an issue and discussion[1] regarding on a few numpy
> > operators failed due to numpy 1.19.0 released on Jun 20, 2020, which
> exists
> > in all branches (works with numpy <= 1.18.5). As numpy operator is still
> an
> > experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0
> > release, so I decided to not block the voting and instead let the
> Community
> > decide whether this is a blocker for the release.
> >
> > [1] https://github.com/apache/incubator-mxnet/issues/18600
> >
> > Best regards,
> > Ciyong Chen
> >
> >
>


Re: [apache/incubator-mxnet] [RFC] MXNet website improvements (#17982)

2020-07-10 Thread Chaitanya Prakash Bapat
https://github.com/apache/incubator-mxnet/issues/18693
- Clipped table of contents
- Missing copy button

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17982#issuecomment-656950607

Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-10 Thread Xingjian SHI
Thanks Ziyi,

I discovered the same issue while trying to use AutoGluon with 1.7.0rc0 and 
would like to share my findings:

Basically, I don't think Gluon Block is designed to be picklable. But pickling 
does work for some cases in the old version:

I've included two cases in the gist 
(https://gist.github.com/sxjscience/944066c82e566f1b89b01fa226678890).

- Case 1: we construct a gluon block, hybridize it, and feed one NDArray to help 
initialize the block. After that, it will no longer be picklable. 
- Case 2: we just construct a gluon block; it is picklable in 1.6.0, but won't be 
picklable in 1.7.0 (a rough sketch of both cases follows below).
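
For concreteness, here is a minimal sketch approximating the two cases. This is 
not the linked gist, and it assumes a stock MXNet 1.x install; the exact failure 
message (if any) depends on the installed version:

    import pickle

    import mxnet as mx
    from mxnet.gluon import nn

    def try_pickle(label, block):
        # Attempt a pickle round-trip on a Gluon block and report the outcome.
        try:
            pickle.loads(pickle.dumps(block))
            print("%s: picklable" % label)
        except Exception as err:
            print("%s: not picklable (%s)" % (label, err))

    # Case 2: a freshly constructed block, never hybridized or run.
    net = nn.Dense(10)
    try_pickle("constructed only", net)

    # Case 1: hybridize and feed one NDArray so the block gets initialized.
    net.initialize()
    net.hybridize()
    net(mx.nd.ones((1, 4)))
    try_pickle("hybridized + run", net)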

Thus, the real issue is: should we support pickling a Gluon Block? If not, 
should we support combining multiprocessing.pool with a Gluon Block? For 
reference, PyTorch supports pickling nn.Module as shown in 
https://gist.github.com/sxjscience/90b812a66d445e759c55eedc3ef93668 and also in 
the docs (https://pytorch.org/tutorials/beginner/saving_loading_models.html).

Best,
Xingjian


On 7/10/20, 11:31 AM, "Patrick Mu"  wrote:

Hi Ciyong, 

I just discovered an issue with the 1.7, which causes the Yolo training 
with latest Gluon CV Yolo to fail.

The PR that causes the failure is 
https://github.com/apache/incubator-mxnet/pull/18358, which modifies  basic 
blocks of Gluon to fix a memory leak issue.

Talked with Leonard, the author of the PR, and he said he found the root 
cause, but patching that PR would modifies those Gluon basic blocks further, 
which might be risky towards existing models and various customer models.

So my 2-cents is reverting this PR in 1.7, and try patching the PR in 1.x 
and 2.0, meaning that the 1.7 won't have memory usage optimized by that feature.

I'd like to hear what you think about this issue.

Thanks,
Ziyi


On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote: 
> Hi Community,
> 
> I would like to call for action to test/validate/vote for the release 
candidate (1.7.0.rc0)
> As there's not any voting result during the scheduled time window, I 
would like to extend the time windows to July 13, 23:59:59 PST.
> Please prepare your time and provide feedback if you've tried with the 
pre-release code bases, thanks!
> 
> Best regards,
> Ciyong
> 
> -Original Message-
> From: Chen, Ciyong  
> Sent: Monday, July 6, 2020 10:48 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; 
Jason Dai ; Markus Weimer ; Michael 
Wall 
> Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> 
> For the language bindings and windows platform, may I have your support 
to help verify these features? Thanks!
> 
> @lanking520 to help verify the Scala/Java @gigasquid to help verify the 
Clojure
> @hetong007 to help verify the R
> @yajiedesign to help verify the windows platform
> 
> Best regards,
> Ciyong Chen
> 
> -Original Message-
> From: Chen, Ciyong 
> Sent: Monday, July 6, 2020 10:39 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; 
Jason Dai ; Markus Weimer ; Michael 
Wall 
> Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> 
> Dear MXNet community,
> 
> This is the vote to release Apache MXNet (incubating) version 1.7.0. 
Voting will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.
> 
> Link to release notes:
> https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
> 
> Link to release candidate:
> https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
> 
> Link to source and signatures on apache dist server:
> 
https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
> 
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
> 
> Additional notes:
> 
>   *   There was an issue and discussion[1] regarding on a few numpy 
operators failed due to numpy 1.19.0 released on Jun 20, 2020, which exists in 
all branches (works with numpy <= 1.18.5). As numpy operator is still an 
experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0 
release, so I decided to not block the voting and instead let the Community 
decide whether this is a blocker for the release.
> 
> [1] https://github.com/apache/incubator-mxnet/issues/18600
> 
> Best regards,
> Ciyong Chen
> 
> 



Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-10 Thread Qing Lan
My +1 (binding) on 1.7.0

Tested:

  *   Built from source with the static build instructions
  *   Tested the Scala package and passed all tests

Thanks,
Qing


From: Tao Lv 
Sent: Friday, July 10, 2020 0:03
To: dev@mxnet.incubator.apache.org 
Cc: d...@mxnet.apache.org ; Bob Paulin 
; Henri Yandell ; Jason Dai 
; Markus Weimer ; Michael Wall 

Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

+1 (binding)

I did:
- Verify the key and signature;
- Untar the source code package;
- Build from source code with makefile, USE_BLAS=mkl, USE_MKLDNN=1;
- Check mx.__version__;
- Run benchmark_score.py under examples/image-classification.

-tao

On Fri, Jul 10, 2020 at 2:18 PM Chen, Ciyong  wrote:

> Hi Community,
>
> I would like to call for action to test/validate/vote for the release
> candidate (1.7.0.rc0)
> As there's not any voting result during the scheduled time window, I would
> like to extend the time windows to July 13, 23:59:59 PST.
> Please prepare your time and provide feedback if you've tried with the
> pre-release code bases, thanks!
>
> Best regards,
> Ciyong
>
> -Original Message-
> From: Chen, Ciyong 
> Sent: Monday, July 6, 2020 10:48 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; Jason
> Dai ; Markus Weimer ; Michael
> Wall 
> Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
>
> For the language bindings and windows platform, may I have your support to
> help verify these features? Thanks!
>
> @lanking520 to help verify the Scala/Java @gigasquid to help verify the
> Clojure
> @hetong007 to help verify the R
> @yajiedesign to help verify the windows platform
>
> Best regards,
> Ciyong Chen
>
> -Original Message-
> From: Chen, Ciyong 
> Sent: Monday, July 6, 2020 10:39 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; Jason
> Dai ; Markus Weimer ; Michael
> Wall 
> Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
>
> Dear MXNet community,
>
> This is the vote to release Apache MXNet (incubating) version 1.7.0.
> Voting will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.
>
> Link to release notes:
> https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
>
> Link to release candidate:
> https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
>
> Link to source and signatures on apache dist server:
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
>
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
>
> Additional notes:
>
>   *   There was an issue and discussion[1] regarding on a few numpy
> operators failed due to numpy 1.19.0 released on Jun 20, 2020, which exists
> in all branches (works with numpy <= 1.18.5). As numpy operator is still an
> experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0
> release, so I decided to not block the voting and instead let the Community
> decide whether this is a blocker for the release.
>
> [1] https://github.com/apache/incubator-mxnet/issues/18600
>
> Best regards,
> Ciyong Chen
>
>


Re: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-10 Thread Patrick Mu
Hi Ciyong, 

I just discovered an issue with 1.7 which causes Yolo training with the latest 
GluonCV Yolo to fail.

The PR that causes the failure is 
https://github.com/apache/incubator-mxnet/pull/18358, which modifies basic 
blocks of Gluon to fix a memory leak issue.

I talked with Leonard, the author of the PR, and he said he found the root 
cause, but patching that PR would modify those Gluon basic blocks further, which 
might be risky for existing models and various customer models.

So my 2 cents is to revert this PR in 1.7 and try patching it in 1.x and 2.0, 
meaning that 1.7 won't have the memory usage optimization from that feature.

I'd like to hear what you think about this issue.

Thanks,
Ziyi


On 2020/07/10 06:18:02, "Chen, Ciyong"  wrote: 
> Hi Community,
> 
> I would like to call for action to test/validate/vote for the release 
> candidate (1.7.0.rc0)
> As there's not any voting result during the scheduled time window, I would 
> like to extend the time windows to July 13, 23:59:59 PST.
> Please prepare your time and provide feedback if you've tried with the 
> pre-release code bases, thanks!
> 
> Best regards,
> Ciyong
> 
> -Original Message-
> From: Chen, Ciyong  
> Sent: Monday, July 6, 2020 10:48 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; Jason 
> Dai ; Markus Weimer ; Michael Wall 
> 
> Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> 
> For the language bindings and windows platform, may I have your support to 
> help verify these features? Thanks!
> 
> @lanking520 to help verify the Scala/Java @gigasquid to help verify the 
> Clojure
> @hetong007 to help verify the R
> @yajiedesign to help verify the windows platform
> 
> Best regards,
> Ciyong Chen
> 
> -Original Message-
> From: Chen, Ciyong 
> Sent: Monday, July 6, 2020 10:39 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; Jason 
> Dai ; Markus Weimer ; Michael Wall 
> 
> Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
> 
> Dear MXNet community,
> 
> This is the vote to release Apache MXNet (incubating) version 1.7.0. Voting 
> will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.
> 
> Link to release notes:
> https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
> 
> Link to release candidate:
> https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
> 
> Link to source and signatures on apache dist server:
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
> 
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
> 
> Additional notes:
> 
>   *   There was an issue and discussion[1] regarding on a few numpy operators 
> failed due to numpy 1.19.0 released on Jun 20, 2020, which exists in all 
> branches (works with numpy <= 1.18.5). As numpy operator is still an 
> experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0 
> release, so I decided to not block the voting and instead let the Community 
> decide whether this is a blocker for the release.
> 
> [1] https://github.com/apache/incubator-mxnet/issues/18600
> 
> Best regards,
> Ciyong Chen
> 
> 


Re: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-10 Thread Tao Lv
+1 (binding)

I did:
- Verify the key and signature;
- Untar the source code package;
- Build from source code with makefile, USE_BLAS=mkl, USE_MKLDNN=1;
- Check mx.__version__;
- Run benchmark_score.py under examples/image-classification.
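
For reference, a rough Python sanity check covering the last two steps (this is 
only a sketch, assuming the freshly built package is on the Python path; 
benchmark_score.py in the source tree remains the actual benchmark):

    # Rough post-build check: confirm the version string and run one small
    # forward pass on CPU. Not a substitute for benchmark_score.py.
    import mxnet as mx
    from mxnet.gluon import nn

    print("MXNet version:", mx.__version__)  # expect 1.7.0 for this RC

    net = nn.Dense(16)
    net.initialize()
    out = net(mx.nd.ones((2, 8)))
    print("forward pass OK, output shape:", out.shape)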

-tao

On Fri, Jul 10, 2020 at 2:18 PM Chen, Ciyong  wrote:

> Hi Community,
>
> I would like to call for action to test/validate/vote for the release
> candidate (1.7.0.rc0)
> As there's not any voting result during the scheduled time window, I would
> like to extend the time windows to July 13, 23:59:59 PST.
> Please prepare your time and provide feedback if you've tried with the
> pre-release code bases, thanks!
>
> Best regards,
> Ciyong
>
> -Original Message-
> From: Chen, Ciyong 
> Sent: Monday, July 6, 2020 10:48 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; Jason
> Dai ; Markus Weimer ; Michael
> Wall 
> Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
>
> For the language bindings and windows platform, may I have your support to
> help verify these features? Thanks!
>
> @lanking520 to help verify the Scala/Java @gigasquid to help verify the
> Clojure
> @hetong007 to help verify the R
> @yajiedesign to help verify the windows platform
>
> Best regards,
> Ciyong Chen
>
> -Original Message-
> From: Chen, Ciyong 
> Sent: Monday, July 6, 2020 10:39 PM
> To: d...@mxnet.apache.org
> Cc: Bob Paulin ; Henri Yandell ; Jason
> Dai ; Markus Weimer ; Michael
> Wall 
> Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0
>
> Dear MXNet community,
>
> This is the vote to release Apache MXNet (incubating) version 1.7.0.
> Voting will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.
>
> Link to release notes:
> https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes
>
> Link to release candidate:
> https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0
>
> Link to source and signatures on apache dist server:
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>
>
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
>
> Additional notes:
>
>   *   There was an issue and discussion[1] regarding on a few numpy
> operators failed due to numpy 1.19.0 released on Jun 20, 2020, which exists
> in all branches (works with numpy <= 1.18.5). As numpy operator is still an
> experimental feature in 1.7.0 release and mainly targeting in MXNet 2.0
> release, so I decided to not block the voting and instead let the Community
> decide whether this is a blocker for the release.
>
> [1] https://github.com/apache/incubator-mxnet/issues/18600
>
> Best regards,
> Ciyong Chen
>
>


RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-09 Thread Chen, Ciyong
Hi Community,

I would like to call for action to test/validate/vote for the release candidate 
(1.7.0.rc0).
As there isn't any voting result within the scheduled time window, I would like 
to extend the window to July 13, 23:59:59 PST.
Please set aside some time to provide feedback if you've tried the pre-release 
code base, thanks!

Best regards,
Ciyong

-Original Message-
From: Chen, Ciyong  
Sent: Monday, July 6, 2020 10:48 PM
To: d...@mxnet.apache.org
Cc: Bob Paulin ; Henri Yandell ; Jason Dai 
; Markus Weimer ; Michael Wall 

Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

For the language bindings and windows platform, may I have your support to help 
verify these features? Thanks!

@lanking520 to help verify the Scala/Java @gigasquid to help verify the Clojure
@hetong007 to help verify the R
@yajiedesign to help verify the windows platform

Best regards,
Ciyong Chen

-Original Message-
From: Chen, Ciyong 
Sent: Monday, July 6, 2020 10:39 PM
To: d...@mxnet.apache.org
Cc: Bob Paulin ; Henri Yandell ; Jason Dai 
; Markus Weimer ; Michael Wall 

Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

Dear MXNet community,

This is the vote to release Apache MXNet (incubating) version 1.7.0. Voting 
will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.

Link to release notes:
https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes

Link to release candidate:
https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0

Link to source and signatures on apache dist server:
https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0<https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0/>

Please remember to TEST first before voting accordingly:
+1 = approve
+0 = no opinion
-1 = disapprove (provide reason)

Additional notes:

  *   There was an issue and discussion[1] regarding a few numpy operators 
failing due to numpy 1.19.0 (released on Jun 20, 2020), which exists in all 
branches (they work with numpy <= 1.18.5). As the numpy operators are still an 
experimental feature in the 1.7.0 release and mainly target the MXNet 2.0 
release, I decided not to block the voting and instead let the Community 
decide whether this is a blocker for the release.

[1] https://github.com/apache/incubator-mxnet/issues/18600
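
For anyone verifying the numpy operators, a quick way to check whether the 
installed numpy falls in the known-good range is sketched below. The 1.18.5 
threshold is taken from the note above; the snippet itself is only illustrative 
and not part of the release tooling:

    # Illustrative check: warn if numpy is newer than the last version known
    # to work with the 1.7.0 numpy operators (numpy <= 1.18.5).
    import numpy as np
    from distutils.version import LooseVersion

    if LooseVersion(np.__version__) > LooseVersion("1.18.5"):
        print("numpy %s detected: some MXNet 1.7.0 numpy operators may fail, "
              "see issue [1]" % np.__version__)
    else:
        print("numpy %s is within the known-good range" % np.__version__)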

Best regards,
Ciyong Chen



Re: Access to MXNet Slack Channel for joshr-mx...@joshr.com

2020-07-08 Thread Sheng Zha
Invite sent. Welcome!

-sz

On 2020/07/08 13:13:58, Josh Rabinowitz  wrote: 
> Hello,
> 
> Can you please allow access for the email
> 
>joshr-mx...@joshr.com
>(which is my email address, but not the email address this email comes
> from)
> 
> to the MXNet Slack Group?
> 
> Thank you
> 
> Josh
> 


Re: Joining on Slack

2020-07-08 Thread Sheng Zha
Invite sent. Welcome!

-sz

On 2020/07/05 05:35:54, Veesh Goldman  wrote: 
> Hi, I have interest in getting involved in the MXNet community. Could I get
> an invite for slack?
> 


Re: Ownership of discuss.mxnet.io

2020-07-07 Thread Michael Wall
Ticket created, see https://issues.apache.org/jira/browse/INFRA-20493

On Sun, Jul 5, 2020 at 12:35 PM Michael Wall  wrote:
>
> Ok, thanks Sheng.  I will get a ticket created.
>
> Looks like there are releases hosted on dist.mxnet.io and
> repo.mxnet.io as well?  Are we looking for INFRA to host those as
> well?
>
> I see discuss.mxnet.io is now being archived to
> discuss-arch...@mxnet.apache.org.  Is there a similar plan in the
> works for discuss.gluon.ai?
>
> Makes sense on the brand management review, I will hold off until we
> hear from INFRA.
>
> Mike
>
> On Sun, Jul 5, 2020 at 12:31 PM Sheng Zha  wrote:
> >
> > Hi Michael,
> >
> > Thank you for offering to help. Yes, that’s the correct understanding.
> >
> > For brand management, we still haven’t started the review yet. On the other 
> > hand, if the hosting is on Apache infra and PPMC manages the site, then the 
> > site will no longer be owned by a third party.
> >
> > Thanks,
> > Sheng
> >
> > > On Jul 5, 2020, at 9:25 AM, Michael Wall  wrote:
> > >
> > > Did a ticket ever get opened for this?  I can open one, but want to
> > > verify my understanding:
> > >
> > > "We want to explore the possibility of hosting discuss.mxnet.io and
> > > discuss.gluon.ai on Apache infrastructure and hook into Apache user
> > > management."
> > >
> > > Shane also asked if we started a conversation with Brand management.
> > > How can I help there?
> > >
> > > Mike
> > >
> > >> On Thu, Jun 18, 2020 at 6:03 AM Marco de Abreu  
> > >> wrote:
> > >>
> > >> I think we can start by opening a ticket with infra referring to this
> > >> thread.
> > >>
> > >> -Marco
> > >>
> > >> Sheng Zha  schrieb am Do., 18. Juni 2020, 09:45:
> > >>
> > >>> Hi Shane,
> > >>>
> > >>> Thanks for the comment. With my apache hat on, I definitely agree that 
> > >>> it
> > >>> would be better to have such forum on Apache managed infra. I’ve started
> > >>> with discussion archive to ensure that everything is recorded, and I 
> > >>> think
> > >>> I can quickly try to make the backups of that forum accessible to the 
> > >>> PPMC
> > >>> and Apache infra.
> > >>>
> > >>> I think the best way to achieve the goal of continuity, eventually, 
> > >>> would
> > >>> be to host it on Apache infrastructure and have its hosting funded by 
> > >>> ASF.
> > >>> How do we go about requesting this?
> > >>>
> > >>> As for the trademark review, we started working on reviewing the binary
> > >>> distribution usage and are currently focusing on the more pressing issue
> > >>> there. We will get to the domain name usage afterwards.
> > >>>
> > >>> -sz
> > >>>
> >  On Jun 17, 2020, at 6:07 AM, Shane Curcuru  
> >  wrote:
> > 
> >  ï»żOn 2020/06/12 22:39:10, Marco de Abreu 
> > >>> wrote: ...
> > > The mxnet.io domain is not under control of the ASF. If we say that a
> > >>> user
> > > facing platform is managed and endorsed by the PPMC, we should make it
> > > available under the ASF domain system to further represent the 
> > > official
> > > affiliation. Also, it removes a weak link in the chain - e.g. the case
> > >>> when
> > > the mxnet.io domain ran out and broke quite a bit of stuff. Since the
> > >>> ASF
> > > offers subdomains for projects, I don't see any reason why not to use
> > >>> them.
> > 
> >  This is the core issue for me, from the larger Apache perspective.
> >  Domain names using Apache marks - especially when they have interactive
> >  traffic about the Apache project itself - work best when the domain 
> >  name
> >  itself is owned by the ASF.  That ensures continuity for the future, in
> >  the case where a party (which includes individual PMC members) stops
> >  contributing to the project.  When the ASF Infra team has direct admin
> >  access to a service or domain, we know that the ASF will be able to
> >  manage it for the future.
> > 
> >  Also, has the PPMC worked with Apache Brand Management on use of
> >  trademarks in domain names?
> > 
> >  https://apache.org/foundation/marks/domains
> > 
> >  --
> > 
> >  - Shane
> >  Director & Member
> >  The Apache Software Foundation
> > >>>


RE: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

2020-07-06 Thread Chen, Ciyong
For the language bindings and windows platform, may I have your support to help 
verify these features? Thanks!

@lanking520 to help verify the Scala/Java
@gigasquid to help verify the Clojure
@hetong007 to help verify the R
@yajiedesign to help verify the windows platform

Best regards,
Ciyong Chen

-Original Message-
From: Chen, Ciyong  
Sent: Monday, July 6, 2020 10:39 PM
To: d...@mxnet.apache.org
Cc: Bob Paulin ; Henri Yandell ; Jason Dai 
; Markus Weimer ; Michael Wall 

Subject: [VOTE] Release Apache MXNet (incubating) version 1.7.0.rc0

Dear MXNet community,

This is the vote to release Apache MXNet (incubating) version 1.7.0. Voting 
will start July 6, 23:59:59 PST and close on July 9, 23:59:59 PST.

Link to release notes:
https://cwiki.apache.org/confluence/display/MXNET/1.7.0+Release+notes

Link to release candidate:
https://github.com/apache/incubator-mxnet/releases/tag/1.7.0.rc0

Link to source and signatures on apache dist server:
https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.7.0.rc0

Please remember to TEST first before voting accordingly:
+1 = approve
+0 = no opinion
-1 = disapprove (provide reason)

Additional notes:

  *   There was an issue and discussion[1] regarding a few numpy operators 
failing due to numpy 1.19.0 (released on Jun 20, 2020), which exists in all 
branches (they work with numpy <= 1.18.5). As the numpy operators are still an 
experimental feature in the 1.7.0 release and mainly target the MXNet 2.0 
release, I decided not to block the voting and instead let the Community 
decide whether this is a blocker for the release.

[1] https://github.com/apache/incubator-mxnet/issues/18600

Best regards,
Ciyong Chen



RE: Updates for 1.7.0 minor release

2020-07-06 Thread Chen, Ciyong
Hi dev,

Thanks everyone for your great support in backporting the necessary fixes into 
1.7.x and identifying & removing the (potential) blocking issues.
Today we've tagged 1.7.0.rc0 for the upcoming 1.7.0 release; thanks to @Tao for 
the help.

The artifacts will be uploaded later, and we'll move forward with the rest of 
the release process.
Again, thanks for your patience.

Thanks,
-Ciyong

-Original Message-
From: sandeep krishnamurthy  
Sent: Tuesday, June 30, 2020 11:07 AM
To: dev@mxnet.incubator.apache.org
Subject: Re: Updates for 1.7.0 minor release

I agree with marking the numpy operators as experimental and going ahead with 
v1.7, given that numpy support is still in progress and mainly targeted at v2.0 
and beyond. And v1.7 has several significant features such as the accelerator 
APIs.

On Mon, 29 Jun 2020, 7:51 pm Chen, Ciyong,  wrote:

> Hi Chai,
>
> We've finalized the multiple license header issue and merged the 
> necessary modification according to the dev@ discussion result.
> But @Leonard reported a numpy version issue in [1], which is about the 
> UT failure of numpy operators, as well as some other numpy issue in [2].
> Which is under discussion so far.
>
> @dev
> As the numpy operator is still in active development, there could be 
> more defects/bugs as including more new functionalities/features in 
> v1.7. Thus it's uncertain about how longer it will take to backport 
> these numpy bug fixes/features from master to v1.7, I suggest to mark 
> numpy operator as experimental feature in v1.7 release, and decide a 
> cut off day (24h or 48h) to include the fixes that are available, and 
> moving the 1.7 release process forward, what do you think?
>
> Thanks,
> -Ciyong
> [1]
> https://github.com/apache/incubator-mxnet/issues/18600#issuecomment-64
> 9712182 [2] https://github.com/apache/incubator-mxnet/issues/18641
>
> -Original Message-
> From: Chaitanya Bapat 
> Sent: Tuesday, June 30, 2020 1:45 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: Updates for 1.7.0 minor release
>
> Hey Ciyong,
>
> Any update from the ASF mentors/legal team re: multiple license header 
> issue?
> I can see the PR for checking Valid license header merged:
> https://github.com/apache/incubator-mxnet/pull/18478
> So if we get the multiple license header issue fixed, we can get 1.7.0 
> release going..
>
> Are we blocked somewhere?
> Thanks
> Chai
>
>
> On Sat, 13 Jun 2020 at 06:32, Chen, Ciyong  wrote:
>
> > Hi Leonard,
> >
> > Thanks for your confirmation on the build issue. As it's not a 
> > blocker for
> > 1.7 release now, then we can consider to backport the fix to 1.7.x 
> > branch when it's ready.
> > The only remaining item is how to deal with the multiple license 
> > header now, thank you for helping on this😊
> >
> > Thanks,
> > -Ciyong
> >
> > -Original Message-
> > From: Leonard Lausen 
> > Sent: Saturday, June 13, 2020 1:10 AM
> > To: dev@mxnet.incubator.apache.org
> > Subject: Re: Updates for 1.7.0 minor release
> >
> > Thank you Ciyong. After further investigation, the build issue is 
> > not as severe as initially claimed on Github. I checked the 
> > high-water memory usage during single-process build: It's 2.7GB on 
> > master. On 1.7 release, high-level usage is 2.2GB. This is much more 
> > acceptable than the previously claimed >16GB usage and thus not a 
> > blocking issue from my perspective. I'll later also report the numbers for 
> > 1.5 and 1.6.
> >
> > Fixing the respective implementations to be more compiler-friendly 
> > would still be good.
> >
> > Looking at the parallel-build high-level memory usage on a 96 core 
> > machine, I saw a 45% memory usage increase during build from 1.5 to 1.7.
> >
> > Best regards
> > Leonard
> >
> >
> > On Fri, 2020-06-12 at 02:09 +, Chen, Ciyong wrote:
> > > Hi Chai,
> > >
> > > Sorry for the late update.
> > >
> > > Recently, several bug fixes [4] including numpy operator/batchnorm 
> > > gradient/LSTM CPU gradient/CI/CD/license issues were back-ported 
> > > into
> > v1.7.x.
> > > So far, there's one build issue and two license issues being tracked.
> > > 1) build issue #18501 (It costs over 16GB memory to 
> > > compile indexing_op.o), which @leezu stated it's a blocker for the 
> > > release[1].
> > > 2) license issue: multiple license header issue[2] is 
> > > under discussion; no valid apache license header issue[3] is 
> > > identified, and I'm working on the PR as

Re: assimilation of mshadow into the MXNet codebase

2020-07-05 Thread Sheng Zha
I found the template in the link Marco provided and filed the software
grant to the secretary.

Sheng

On Sun, Jul 5, 2020 at 10:09 AM Michael Wall  wrote:

> Yes, to secretary@.  Do you need a template?
>
> Thanks Sheng
>
> Mike
>
> On Sun, Jul 5, 2020 at 12:59 PM Sheng Zha  wrote:
> >
> > Hi Michael,
> >
> > Thanks for offering help. I can represent the code donors and file the
> software grant. Should the filing go to secretary@?
> >
> > Sheng
> >
> > > On Jul 5, 2020, at 9:50 AM, Michael Wall  wrote:
> > >
> > > Is this being tracked in a ticket anywhere?  What help can I offer?
> > >
> > > Mike
> > >
> > >> On Fri, Jun 12, 2020 at 6:44 PM Marco de Abreu <
> marco.g.ab...@gmail.com> wrote:
> > >>
> > >> Hi Sheng,
> > >>
> > >> since this is a "large one off code contribution", the policy [1]
> states
> > >> that they should be brought in through a software grant.
> > >>
> > >> Best regards,
> > >> Marco
> > >>
> > >> [1]: https://www.apache.org/foundation/how-it-works/legal.html
> > >>
> > >>> On Fri, Jun 12, 2020 at 11:41 PM Sheng Zha 
> wrote:
> > >>>
> > >>> To mentors,
> > >>>
> > >>> Do we the PPMC need to fill out IP clearance for this code donation?
> > >>>
> > >>> -sz
> > >>>
> > >>> On 2019/04/24 21:19:49, Sheng Zha  wrote:
> >  The community has agreed to donate mshadow to the mxnet code base. I
> > >>> will start the migration and build logic changes soon.
> > 
> >  -sz
> > 
> >  On 2019/04/07 21:47:39, Sheng Zha  wrote:
> > > I agree it would make development easier to donate mshadow to mxnet
> > >>> code base, since mshadow is only used in MXNet. I support donating
> the
> > >>> mshadow code to mxnet and I started an RFC for this in mshadow [1].
> > >
> > > [1] https://github.com/dmlc/mshadow/issues/373
> > >
> > > -sz
> > >
> > > On 2019/04/06 04:38:19, Tianqi Chen 
> wrote:
> > >> Technically, mshadow is sufficient for MXNet. Adopting other
> > >>> libraries (
> > >> eigen or xtensor) will unnecessarily increase the codebase
> complexity
> > >> without any additional gains.
> > >>
> > >> Given that mshadow is only used by mxnet. I do support donating it
> > >>> into
> > >> mxnet codebase.
> > >> To respect the original mshadow community. I would recommend
> > >>> starting a
> > >> community RFC In the mshadow github issue for a week, before we
> > >>> start the
> > >> migrating process.
> > >> Also, I would recommend a rebase merge just like the case of
> > >>> MXNet.jl code
> > >> base to preserve the contribution history.
> > >>
> > >> Tianqi
> > >>
> > >>
> > >> On Fri, Apr 5, 2019 at 9:25 PM Alfredo Luque
> > >>  wrote:
> > >>
> > >>> Do you have a link to both of these proposals?
> > >>>
> > >>> On Fri, Apr 5, 2019 at 20:14 Anirudh Acharya <
> > >>> anirudhk...@gmail.com>
> > >>> wrote:
> > >>>
> >  Hi Pedro,
> > 
> >  mshadow is mostly used for tensor arithmetic. There have been
> > >>> discussions
> >  about including it within mxnet. I think it is a good idea.
> > 
> >  As a more long term solution using libraries like eigen to
> > >>> perform linear
> >  algebra operations was also suggested by anirudh2290@. I think
> > >>> xtensor(
> >  https://github.com/QuantStack/xtensor ) can also be a candidate
> > >>> here.
> > 
> >  -
> >  Anirudh
> > 
> > 
> >  On Fri, Apr 5, 2019 at 7:03 PM Pedro Larroy <
> > >>> pedro.larroy.li...@gmail.com>
> >  wrote:
> > 
> > > Hi
> > >
> > > Some developers have noticed that working in mshadow is
> > >>> cumbersome as
> > > it's a 3rdparty subrepo.
> > >
> > > Since mshadow is a bunch of headers which don't have much of
> > > independent tests / library functionality, me and other
> > >>> developers
> > > believe that it would be good to assimilate this code in the
> > > repository for ease of contribution and changes without having
> > >>> to go
> > > trough contortions to test PRs that modify mshadow.
> > >
> > > Would anybody oppose this change?
> > >
> > > Thanks and have a nice weekend.
> > >
> > > Pedro.
> > >
> > 
> > >>>
> > >>
> > >
> > 
> > >>>
>


Re: assimilation of mshadow into the MXNet codebase

2020-07-05 Thread Michael Wall
Yes, to secretary@.  Do you need a template?

Thanks Sheng

Mike

On Sun, Jul 5, 2020 at 12:59 PM Sheng Zha  wrote:
>
> Hi Michael,
>
> Thanks for offering help. I can represent the code donors and file the 
> software grant. Should the filing go to secretary@?
>
> Sheng
>
> > On Jul 5, 2020, at 9:50 AM, Michael Wall  wrote:
> >
> > Is this being tracked in a ticket anywhere?  What help can I offer?
> >
> > Mike
> >
> >> On Fri, Jun 12, 2020 at 6:44 PM Marco de Abreu  
> >> wrote:
> >>
> >> Hi Sheng,
> >>
> >> since this is a "large one off code contribution", the policy [1] states
> >> that they should be brought in through a software grant.
> >>
> >> Best regards,
> >> Marco
> >>
> >> [1]: https://www.apache.org/foundation/how-it-works/legal.html
> >>
> >>> On Fri, Jun 12, 2020 at 11:41 PM Sheng Zha  wrote:
> >>>
> >>> To mentors,
> >>>
> >>> Do we the PPMC need to fill out IP clearance for this code donation?
> >>>
> >>> -sz
> >>>
> >>> On 2019/04/24 21:19:49, Sheng Zha  wrote:
>  The community has agreed to donate mshadow to the mxnet code base. I
> >>> will start the migration and build logic changes soon.
> 
>  -sz
> 
>  On 2019/04/07 21:47:39, Sheng Zha  wrote:
> > I agree it would make development easier to donate mshadow to mxnet
> >>> code base, since mshadow is only used in MXNet. I support donating the
> >>> mshadow code to mxnet and I started an RFC for this in mshadow [1].
> >
> > [1] https://github.com/dmlc/mshadow/issues/373
> >
> > -sz
> >
> > On 2019/04/06 04:38:19, Tianqi Chen  wrote:
> >> Technically, mshadow is sufficient for MXNet. Adopting other
> >>> libraries (
> >> eigen or xtensor) will unnecessarily increase the codebase complexity
> >> without any additional gains.
> >>
> >> Given that mshadow is only used by mxnet. I do support donating it
> >>> into
> >> mxnet codebase.
> >> To respect the original mshadow community. I would recommend
> >>> starting a
> >> community RFC In the mshadow github issue for a week, before we
> >>> start the
> >> migrating process.
> >> Also, I would recommend a rebase merge just like the case of
> >>> MXNet.jl code
> >> base to preserve the contribution history.
> >>
> >> Tianqi
> >>
> >>
> >> On Fri, Apr 5, 2019 at 9:25 PM Alfredo Luque
> >>  wrote:
> >>
> >>> Do you have a link to both of these proposals?
> >>>
> >>> On Fri, Apr 5, 2019 at 20:14 Anirudh Acharya <
> >>> anirudhk...@gmail.com>
> >>> wrote:
> >>>
>  Hi Pedro,
> 
>  mshadow is mostly used for tensor arithmetic. There have been
> >>> discussions
>  about including it within mxnet. I think it is a good idea.
> 
>  As a more long term solution using libraries like eigen to
> >>> perform linear
>  algebra operations was also suggested by anirudh2290@. I think
> >>> xtensor(
>  https://github.com/QuantStack/xtensor ) can also be a candidate
> >>> here.
> 
>  -
>  Anirudh
> 
> 
>  On Fri, Apr 5, 2019 at 7:03 PM Pedro Larroy <
> >>> pedro.larroy.li...@gmail.com>
>  wrote:
> 
> > Hi
> >
> > Some developers have noticed that working in mshadow is
> >>> cumbersome as
> > it's a 3rdparty subrepo.
> >
> > Since mshadow is a bunch of headers which don't have much of
> > independent tests / library functionality, me and other
> >>> developers
> > believe that it would be good to assimilate this code in the
> > repository for ease of contribution and changes without having
> >>> to go
> > trough contortions to test PRs that modify mshadow.
> >
> > Would anybody oppose this change?
> >
> > Thanks and have a nice weekend.
> >
> > Pedro.
> >
> 
> >>>
> >>
> >
> 
> >>>


  1   2   3   4   5   6   7   8   9   10   >