Re: CUDA recommendation

2019-05-24 Thread Sheng Zha
10.1 is recommended. The oldest CUDA version that we release is 8.0.
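
For anyone double-checking a CUDA-enabled install, here is an illustrative sanity check in Python; it assumes one of the CUDA pip wheels (e.g. mxnet-cu92 or mxnet-cu100) is installed on a machine with a visible GPU:

    # Illustrative sanity check only; assumes a CUDA-enabled mxnet-cu* wheel.
    import mxnet as mx

    print(mx.context.num_gpus())            # should print >= 1 on a GPU machine
    x = mx.nd.ones((2, 3), ctx=mx.gpu(0))   # allocate a small array on the first GPU
    print(x.asnumpy())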

-sz

On 2019/05/24 23:29:38, Marco de Abreu  wrote: 
> While we are at the topic, did we actually agree on dropping support for
> some versions? So far we are releasing all the way back to CUDA 7.5, I think.
> 
> -Marco
> 
> Skalicky, Sam  wrote on Fri., May 24, 2019,
> 23:43:
> 
> > Hi Aaron
> >
> > Right now, the most stable version is CUDA 9.2. CUDA 10 is supported and
> > some pip wheels are available, but there are known performance issues. And
> > we are quickly moving to CUDA 10.1. So things are still in flux now. I
> > think the best approach would be to wait a couple more weeks before
> > updating this part of the docs.
> >
> > Sam
> >
> >
> > > On May 24, 2019, at 2:09 PM, Aaron Markham 
> > wrote:
> > >
> > > What version of CUDA is currently recommended?
> > > Now that there are packages for CUDA 10, shouldn't the build from
> > > source (and other) documentation reflect the latest greatest
> > > combinations?
> > >
> > > Specifically the Ubuntu guide [1] states: "CUDA 9.2 is recommended."
> > >
> > > [1]
> > https://mxnet.incubator.apache.org/versions/master/install/ubuntu_setup.html#cuda-dependencies
> > >
> > > Cheers,
> > > Aaron
> >
> >
> 


[Announcement] New Committer - Jeremie Desgagne-Bouchard

2019-05-23 Thread Sheng Zha
Hi all,

Please join me in welcoming Jeremie Desgagne-Bouchard as a new committer of 
Apache MXNet (incubating)!

Jeremie has been a core contributor to the R binding of MXNet and has been a 
great help to the R
community in MXNet.

Welcome, Jeremie!

-sz


[Announcement] New Committer - Yuxi Hu

2019-05-23 Thread Sheng Zha
Hi all,

Please join me in welcoming Yuxi (Darren) Hu as a new committer of Apache MXNet 
(incubating)!

Yuxi has been one of the core contributors to the Horovod integration in MXNet. 
Along the way, he has
been making meaningful contributions to improve the mxnet backend, such as 
introducing an API for
engine push to make it easier to integrate Horovod and external operator 
libraries.

Welcome, Darren!

-sz



[Announcement] New Committer - Ding Kuo

2019-05-23 Thread Sheng Zha
Hi all,

Please join me in welcoming Ding Kuo as a new committer of Apache MXNet 
(incubating)!

Ding is well-known in the MXNet community as @chinakook. He has been a great 
advocate for MXNet over the
years. Besides contributing to MXNet on GitHub, he maintains the Awesome-MXNet 
repo [1], a curated list
of great resources for MXNet.
You can often find chinakook on Zhihu (the most popular Quora-style SNS in 
China) advocating for
MXNet and its ecosystem [2].

Welcome, Ding!

[1] https://github.com/chinakook/Awesome-MXNet
[2] https://www.zhihu.com/search?type=content&q=chinakook



[Announcement] New Committer - Aston Zhang

2019-05-23 Thread Sheng Zha
Hi all,

Please join me in welcoming Aston Zhang as a new committer of Apache MXNet 
(incubating)!

Aston has been quite active in helping the community grow. Moreover, he helped 
create the book "Dive
into Deep Learning" [1], a great interactive introduction to deep learning 
developed with MXNet.

Welcome, Aston!

-sz

[1] http://d2l.ai


Re: warnings as errors

2019-05-21 Thread Sheng Zha
It would be great to enforce the check for warnings and treat them as errors. Some 
questions I have:
- what are the warnings that you think should be ignored?
- for the rest of the warning types, can we turn them on one by one?

-sz

On 2019/05/21 22:33:51, Pedro Larroy  wrote: 
> Hi dev@
> 
> I try to fix any warning that I see during compilation of MXNet on my
> platform and with the build toggles that I care about. These seemingly
> trivial and thankless efforts nonetheless take energy on the
> contributor side.
> 
> I think overall I have myself submitted more than a dozen PRs fixing
> warnings, and I would like to call for additional help and
> contributions in this area.
> 
> There was a question from Lin about discussing this on the mailing
> list. I have the feeling that everybody agrees on moving towards zero
> warnings and warnings as errors. I think there are unavoidable
> warnings that can be disabled specifically, such as the one triggered
> by the mshadow type switch.
> 
> Some important missing warnings, such as the warning on missing return
> values (i.e. forgetting to return from a function returning non-void),
> cause bugs, danger, and additional time spent bug-fixing, which could be
> better spent somewhere else.
> 
> Is there a process that we can figure out, such as more expedited
> merges of PRs fixing warnings, or a specific label?
> 
> Some simple PRs that fix a warning can take long to merge, and
> sometimes trigger too much discussion, which makes the process a bit
> unfriendly to contributors.
> 
> Any help or constructive ideas on this topic would be appreciated.
> 
> Pedro.
> 


Re: [RFC] Support for creation of Large Tensors in MXNet

2019-05-18 Thread Sheng Zha
Thanks for clarifying. This seems like a duplicate of [1] (though there wasn't 
any feedback there). I think everyone already agrees on the goal. 

> Currently, we assume the max size of each dimension.

I agree with Tao that int64_t would be necessary given that it's common to 
flatten and reshape ndarrays.

To help avoid repeating discussion and to make this discussion more productive, 
here is some of the relevant context that I'm aware of:
- The first part of the proposed change was merged in #11742, which caused 
#14496, i.e. performance degradation in transpose and imdecode. The full scope 
is still unclear.
- A compilation flag was added in #14570 so that people can explicitly opt in 
for the support without impacting others using the default setting.

Given the context, since the goal is to support large tensors by default without 
performance impact, I hope more investigation could accompany this proposal 
that covers:
- The problem: list the parts (e.g. operators) whose performance is impacted by 
changing the index type, and the amount of slow-down.
- The solution for addressing the slow-down.

Thanks.

-sz

[1] 
https://lists.apache.org/thread.html/52b784cf85f89a22355e195fc88b01992fb1993a6f08499a46fa1ff8@%3Cdev.mxnet.apache.org%3E
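
As a small illustration of the current opt-in support and of why flatten/reshape pushes a single dimension past int32, here is a hedged Python sketch; it assumes a build with the opt-in large-tensor flag from #14570 enabled (the shapes are illustrative only and allocate several GB):

    # Illustrative sketch; requires an MXNet build with the opt-in
    # large-tensor flag from #14570, otherwise these allocations are unsupported.
    import mxnet as mx

    big = 2**32 + 2                           # more elements than int32 can index
    x = mx.nd.ones((big,), dtype='int8')      # ~4.3 GB of int8 data
    print(x.shape, x.size)

    # flattening/reshaping is why a single dimension also needs int64_t:
    y = mx.nd.ones((2**16, 2**16 + 2), dtype='int8').reshape(-1)
    print(y.shape)                            # one dimension now exceeds 2^31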

On 2019/05/19 02:43:39, "Srivastava, Rohit Kumar" 
 wrote: 
> Hi Tao,
> The existing MXNet implementation doesn't support large tensors. MXNet 
> NDArray creation for tensors with sizes larger than 2^32 is only supported by 
> enabling a build flag for now. The purpose of this thread is to have the 
> community provide feedback on the design cwiki for *Large Tensor Support* in 
> MXNet. The intention is to make large tensor support a default feature in 
> MXNet (in the future) w/o any performance impact, so consumers do not have to 
> build it from source. 
> 
> -Rohit
> 
> On 5/18/19, 5:59 PM, "Lv, Tao A"  wrote:
> 
> Hi Rohit,
> 
> The existing MKL-DNN and its integration in MXNet should already support 
> *large tensors*, meaning the total number of elements (Prod(shape)) can 
> exceed INT_MAX. Feel free to let me know if you find any issue when using MKL-DNN 
> operators with large tensors.
> 
> For large dimension sizes (shape[x]), MKL-DNN is going to add support in its 
> 1.0 release, which will be released in the middle of the year. But I'm not sure if 
> MXNet has a plan to support that.
> 
> Thanks,
> -tao
> 
> -Original Message-
> From: Srivastava, Rohit Kumar [mailto:srivastava@buckeyemail.osu.edu] 
> Sent: Sunday, May 19, 2019 7:23 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [RFC] Support for creation of Large Tensors in MXNet
> 
> Hi Tao,
> There are already a couple of operators implemented in MXNet that 
> currently support tensors with sizes over ~4.5 billion. In the meantime, 
> core MXNet can move ahead with providing initial support for such large 
> tensors so MXNet customers can start using it.
> 
> Good to hear MKLDNN will provide support for such cases. Do you have a 
> timeline as to when this feature will be released?
> 
> -Rohit
> 
> On 4/29/19, 7:18 PM, "Lv, Tao A"  wrote:
> 
> Thank you Lin! I would expect the current MKL-DNN implementation 
> already supports the scenario you mentioned here. This can be verified by this 
> issue: https://github.com/apache/incubator-mxnet/issues/13451
> 
> But as I said before, since we support flatten and reshape operators, 
> it's possible for users to convert a tensor with a large element size to a 
> tensor with a large dimension size. That could possibly cause issues there.
> 
> To cover more cases, MKL-DNN is going to support INT64 dimension size 
> in its coming 1.0 major release.
> 
> -tao
> 
> -Original Message-
> From: Lin Yuan [mailto:apefor...@gmail.com] 
> Sent: Tuesday, April 30, 2019 12:56 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [RFC] Support for creation of Large Tensors in MXNet
> 
> Tao,
> 
> - what's the max size of dimensionality? Which data type is used to 
> define dimensionality (ndims)?
> We assume the max size of dimensionality is relatively small. Hence 
> `int` data type is used to define ndim
> 
> - what's the max size of each dimension? Which data type is used to 
> define dimension size (shape[x])?
> Currently, we assume the max size of each dimension is not going to 
> exceed
> 2^31 in real applications. Hence the data type is `int32_t`
> 
> - what's the max size of total elements? Which data type is used to 
> define element size (Prod(shape))?
> We assume the total number of elements in a tensor can be larger than 
> 2^32 in some applications such as deep graph library. We use the data type 
> `int64_t` to represent the total element size. Currently due to 

Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Sheng Zha
Hi Pedro,

Thanks for taking the initiative. Skimming through the design doc, I didn't see 
a comparison with existing solutions such as Relay in TVM, which is already a 
dependency of mxnet. Could you elaborate on the comparison with existing 
solutions in the design doc too?

-sz

On 2019/05/14 23:49:30, Pedro Larroy  wrote: 
> Hi dev@
> 
> As a result of my deep dives on the graph machinery I have created a
> new proposal to improve the operator graph in MXNet.
> 
> This would mean superseding the use of NNVM Graph in MXNet and having
> a new implementation that we can use to simplify a lot of code and do
> powerful graph manipulation and passes such as operator fusion and
> other optimizations.
> 
> As it would be a change with big impact and ramifications, your
> thoughts and feedback on the document would be highly appreciated, so
> we can take into account potential future interesting use cases:
> 
> https://cwiki.apache.org/confluence/display/MXNET/MXVM%3A+Operator+graph+2.0
> 
> Pedro.
> 


Re: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-13 Thread Sheng Zha
Thanks to the help from mentors, our vote on general@incubator is set to pass.

I'm sharing the issues mentioned in the vote that we need to fix before the next 
release:
- Standard way to run rat, see [1]
- cpp-package/example/get_data.sh and similar scripts should use the canonical URL 
for the MNIST data and mention the license (P.S. [2] mentions CC BY-SA 3.0 but 
the original link [3] didn't mention a license. We may need to clarify this first.)

And thanks to all who contributed to this release, and thanks to Junru who drove the 
release work.

-sz

[1] https://github.com/apache/incubator-mxnet/issues/14936
[2] http://www.pymvpa.org/datadb/mnist.html
[3] http://yann.lecun.com/exdb/mnist/

On 2019/05/09 16:55:06, Hen  wrote: 
> Noting that I am a belated +1 on the release.
> 
> I had one item regarding dataset licensing that I’d like to see improved
> for the next release, but I don’t believe it would have been a blocker.
> 
> Hen
> 
> On Sat, May 4, 2019 at 00:00 Junru Shao  wrote:
> 
> > Dear MXNet community,
> >
> > I'm happy to announce the results of the vote.
> >
> > This vote passes with 12 +1 votes (3 binding), no 0 votes, and 1 -1 vote.
> > +1 votes
> > * Sheng Zha / binding
> > * Qing Lan / binding
> > * Carin Meier / binding
> > * Aaron Markham
> > * Pedro Larroy
> > * Lai Wei
> > * Damien Stanton
> > * Kellen Sunderland
> > * Yuxi Hu
> > * Joshua Z. Zhang
> > * Philip Hyunsu Cho
> > * Aston Zhang
> >
> > 0 votes
> > * No votes
> >
> > -1 votes
> > * Anirudh Subramanian
> >
> > Vote thread can be found here [1]. The list of members can be found here
> > [2].
> >
> > I'll continue with the release process and the release announcement will
> > follow in the next few days.
> >
> > Best regards,
> > Junru Shao
> >
> > [1]
> >
> > https://lists.apache.org/thread.html/6c140f4c180c259dd1b7f4ecf36f2d083ed810cd68b37d7f635f5614@%3Cdev.mxnet.apache.org%3E
> > [2] http://incubator.apache.org/projects/mxnet.html
> >
> 


Re: Unable to comment on GitHub issue

2019-05-09 Thread Sheng Zha
Locking a conversation wouldn't limit a committer from commenting. "While a 
conversation is locked, only people with write access and repository owners and 
collaborators can add comments." [1]

Unless the Apache organization has the blocking setting enabled, blocking by an 
individual shouldn't limit one from commenting on issues in the mxnet repo either. The 
organization that owns the repo needs to explicitly block the person in order 
to prevent them from commenting on an issue in that organization's repos. [2]

-sz

[1] https://help.github.com/en/articles/locking-conversations
[2] 
https://help.github.com/en/articles/blocking-a-user-from-your-personal-account

On 2019/05/09 23:33:00, Aaron Markham  wrote: 
> I just locked one of the issues I created:
> https://github.com/apache/incubator-mxnet/issues/14918
> Are you sure you don't have the unlock button on the right side?
> You should see this:
> 
> aaronmarkham locked as off topic and limited conversation to
> collaborators 24 seconds from now
> 
> Then to the right of that:
> 
>  Unlock conversation
>  Pin issue
> 
> On Thu, May 9, 2019 at 4:27 PM Naveen Swamy  wrote:
> >
> > I don't see the option. Another possible explanation is that someone must have 
> > blocked me; if that is the case, it goes against the ethos of open source.
> > Apache infra should override that setting for Apache projects. Anyway, I 
> > created this Jira:
> > https://issues.apache.org/jira/plugins/servlet/mobile#issue/INFRA-18356
> >
> > -Naveen
> >
> > > On May 9, 2019, at 4:19 PM, Aaron Markham  
> > > wrote:
> > >
> > > A new feature: https://help.github.com/en/articles/locking-conversations
> > > So someone must have locked it. I can see the option on the right hand
> > > side column, all the way at the bottom. You will probably have the
> > > ability to unlock it from there too.
> > >
> > >> On Thu, May 9, 2019 at 3:42 PM Chaitanya Bapat  
> > >> wrote:
> > >>
> > >> Any specific issues you could give the links to? So I could verify if
> > >> that's the case with me.
> > >>
> > >>> On Thu, 9 May 2019 at 14:44, Naveen Swamy  wrote:
> > >>>
> > >>> I am unable to comment on certain GitHub issues and see a locked Icon,
> > >>> wondering if anyone has experienced this and know why?
> > >>>
> > >>
> > >>
> > >> --
> > >> *Chaitanya Prakash Bapat*
> > >> *+1 (973) 953-6299*
> > >>
> 


Re: [QUESTION] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-08 Thread Sheng Zha
The MNIST dataset is under CC BY-SA 3.0 and is widely redistributed.


-sz

On Wed, May 8, 2019 at 12:57 PM Hen  wrote:

> Looking at
> apache-mxnet-src-1.4.1.rc0-incubating/cpp-package/example/get_data.sh -
> what's the license on the data that is being pulled in?
>
> Namely:
>
> "
>
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/train-images-idx3-ubyte.gz
> "
> "
>
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/train-labels-idx1-ubyte.gz
> "
> "
>
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/t10k-images-idx3-ubyte.gz
> "
> "
>
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/t10k-labels-idx1-ubyte.gz
> "
> "http://data.mxnet.io/data/mnist_train.csv.gz;
>
> Thanks,
>
> Hen
>
> On Mon, Apr 29, 2019 at 11:52 PM Junru Shao 
> wrote:
>
> > Dear MXNet community,
> >
> > This is the 3-day vote to release Apache MXNet (incubating) version
> v1.4.1.
> > The voting on dev@ list will start Apr 29 23:59:59 (PST) and close on
> May
> > 02 23:59:59.
> >
> > Below are links to
> > 1) Release notes:
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.1+Release+Notes
> > .
> > 2) Release Candidate:
> > https://github.com/apache/incubator-mxnet/releases/tag/1.4.1.rc0.
> > 3) Source and signatures on Apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/.
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> > Best regards,
> > Junru Shao
> >
>


Re: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-07 Thread Sheng Zha
Correction: Joshua Zhi Zhang is a PPMC member and thus his vote is binding too, 
which brings the +1 binding votes to 4.

-sz

On 2019/05/04 07:23:17, Junru Shao  wrote: 
> As Anirudh changed his vote from -1 to 0 in the voting thread just now, the
> vote result changes to 12 +1 votes (3 binding), one 0 vote, and no -1 vote.
> 
> Thank you guys again for your hard work testing the release! I will start a
> voting thread on general@.
> 
> Thanks,
> Junru
> 
> 
> On Fri, May 3, 2019 at 11:59 PM Junru Shao  wrote:
> 
> > Dear MXNet community,
> >
> > I'm happy to announce the results of the vote.
> >
> > This vote passes with 12 +1 votes (3 binding), no 0 votes, and 1 -1 vote.
> > +1 votes
> > * Sheng Zha / binding
> > * Qing Lan / binding
> > * Carin Meier / binding
> > * Aaron Markham
> > * Pedro Larroy
> > * Lai Wei
> > * Damien Stanton
> > * Kellen Sunderland
> > * Yuxi Hu
> > * Joshua Z. Zhang
> > * Philip Hyunsu Cho
> > * Aston Zhang
> >
> > 0 votes
> > * No votes
> >
> > -1 votes
> > * Anirudh Subramanian
> >
> > Vote thread can be found here [1]. The list of members can be found here
> > [2].
> >
> > I'll continue with the release process and the release announcement will
> > follow in the next few days.
> >
> > Best regards,
> > Junru Shao
> >
> > [1]
> > https://lists.apache.org/thread.html/6c140f4c180c259dd1b7f4ecf36f2d083ed810cd68b37d7f635f5614@%3Cdev.mxnet.apache.org%3E
> > [2] http://incubator.apache.org/projects/mxnet.html
> >
> 


[DISCUSS] 1.5.0 Release Plan

2019-05-07 Thread Sheng Zha
Hi,

While the 1.4.1 vote on general@incubator is still ongoing, I'd like to propose 
that we start preparing the 1.5.0 release.

1.5.0 will include changes that date back to last year, and there have been a 
lot of new features and improvements in it, so it will likely take us more time 
to prepare than 1.4.1. I propose the following timeline:
- Cut release branch: release branch already cut. Will sync with master branch 
on 5/15/2019 EOD.
- Code freeze: 5/17/2019. No more changes unless the release branch is in a 
broken state.
- Tag and vote: 5/20/2019 onward.

Lai Wei (roywei@) expressed to me offline that he’s willing to help drive this 
release as release manager, and I’m happy to help again as committer.

If you have features in progress that you’d like to include in 1.5.0:
- Add your feature to the scope: 
https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status
- Indicate in this thread:
  - how confident you are about making it happen before the code freeze. If not 
confident, provide an estimate for a more manageable code freeze date so that 
people can discuss whether to extend the deadline or to skip one release for it.
  - whether your PR requires more attention to make it happen.

Thanks for your attention. Comments and suggestions are also welcome.

-sz

Re: mxnet slack access

2019-05-06 Thread Sheng Zha
Just invited you. Welcome!

-sz

On 2019/05/06 11:21:49, Geoff Bull  wrote: 
> Please invite me to slack.
> 
> Thanks
> Geoff
> 
> 


Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-06 Thread Sheng Zha
Hi Konstantin,

Thanks for your reply.

> I personally prefer small incremental changes, and starting with some MVP, 
> where M is
> minimal, meaning least possible effort (e.g. number of dependencies) used
> from conan.

Agreed. That said, this seems like a case where the adoption decision can't be 
based on an MVP, as having one additional dependency just to be able to automatically 
download 3 out of some 15 dependencies doesn't seem to be a desirable state to 
be in.
A feature branch has been created for you and others to collaborate on, so that we can 
not only ensure the coverage but also ensure that there are enough people who are 
willing to push this forward.

> may we define an actual scope then

As mentioned in my last reply, I'd recommend solving one of the two main use 
cases of mxnet builds. The efforts should also make it possible to remove these 
scripts here: https://github.com/apache/incubator-mxnet/tree/master/setup-utils.

> but, I am not sure, for me it looks like strange use-case. I can imagine
> you have developers with no internet connection and send them parcels with
> CDs of mxnet source code, but I think it's hard to manage such workflow
> with any tool, no matter submodules, CMake download or conan.

Thanks for the clarification. This use case is actually not that rare. People 
may need to build mxnet to optimize for performance on their custom hardware, 
in a sandboxed environment for security reasons. Submodules can solve it by 
including the source code. Since the offline use case is not what conan is 
designed for, we just need to make sure that conan is not a required build 
tool, and that the alternatives still work in this use case. I made a relevant comment in 
the PR too.

To sum up, I think conan is a tool that can simplify the dependency management 
in mxnet from a concept level. We should make sure that it can provide the 
coverage needed (e.g. have the packages we need with the versions we need and 
fast turnaround for upgrading dependencies), and make sure it doesn't break 
other build needs. For now let's utilize the feature branch to collaborate with 
people who share the interest and are willing to help out, with the goal of making 
sure we have full coverage at the time of adoption.
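
For readers who haven't used conan: the idea is that third-party dependencies get declared in a single conanfile. A hypothetical, minimal conanfile.py sketch of that idea is below; the package names, versions, and channels are illustrative only and are not taken from the actual PR:

    # Hypothetical sketch only -- references below are illustrative, not the PR's.
    from conans import ConanFile, CMake

    class MXNetConan(ConanFile):
        settings = "os", "compiler", "build_type", "arch"
        # dependencies that are otherwise fetched ad hoc (archives, CMake
        # downloads, submodules) would be declared here in one place
        requires = (
            "openblas/0.3.5@conan/stable",
            "opencv/3.4.5@conan/stable",
        )
        generators = "cmake"

        def build(self):
            cmake = CMake(self)
            cmake.configure()
            cmake.build()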

Best,
-sz

On 2019/05/06 08:10:29, Konstantin Ivlev  wrote: 
> Hi Sheng Zha,
> 
> >  Currently, the linked PR only includes OpenBLAS
> actually, it's OpenBLAS + OpenCV + lapack, three libraries as a
> proof-of-concept
> 
> > A proof-of-concept that shows it actually replaces more dependency than
> openblas would be helpful
> may we define an actual scope then, e.g. how many dependencies - 2, 3, 5,
> half of them, or all of them?
> 
> > If the value proposition of conan is to simplify the dependency
> management, then it should unify other solutions instead of keeping these
> solutions around.
> personally, I don't think it's easy to do in a single shot. I personally
> prefer small incremental changes, and starting with some MVP, where M is
> minimal, meaning the least possible effort (e.g. number of dependencies) used
> from conan. then eventually migrate other dependencies to conan one by one,
> until it is fully migrated. but that's just my personal vision, if you see
> that this strategy is wrong, it's okay.
> 
> - It's unclear how it impacts people with only an archive of mxnet source
> code but without network access.
> as I have tried, conan doesn't change that much. if you have no network
> access and only the mxnet source code, then "git submodule init" will fail for
> you. if you also have archives of the dependencies somewhere on your
> hard-drive, you can manage submodules to clone from the local repository
> rather than GitHub. but then you will get stuck at the CMake generation step, which
> will fail to download Intel MKL.
> if you use conan instead of submodules and CMake download, there is not
> much difference - it will fail to fetch the same dependencies without an internet
> connection on a clean machine. if you somehow happen to have an archive of the
> conan cache on your hard-drive, you may unpack it and use it without an internet
> connection.
> but I am not sure, for me it looks like a strange use-case. I can imagine
> you have developers with no internet connection and send them parcels with
> CDs of mxnet source code, but I think it's hard to manage such a workflow
> with any tool, no matter submodules, CMake download, or conan.
> 
> yours sincerely, Konstantin
> 
> 
> Sun, May 5, 2019 at 06:25, Sheng Zha :
> 
> > To be clear, my intention is really to prevent a seemingly good solution
> > to exacerbate the problem that it sets out to solve. This tends to happen
> > when there are not enough people to drive it to the end.
> >
> > If there are additional values in this solution that people feel outweighs
> > the problems 

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-04 Thread Sheng Zha
To be clear, my intention is really to prevent a seemingly good solution from 
exacerbating the problem that it sets out to solve. This tends to happen when 
there are not enough people to drive it to the end.

If there is additional value in this solution that people feel outweighs the 
problems below, I'd be more than happy to be persuaded to vote otherwise.

-sz

On 2019/05/04 23:08:43, Sheng Zha  wrote: 
> Thank you for the explanation and sorry that I missed the earlier context as 
> it has been a while. While I like the idea of simplifying the dependency 
> management with tools like conan, I have the following concerns on this vote 
> as-is (it's also my take on why I think the PR is stuck):
> 
> - It's unclear how much dependency needs can conan help in mxnet builds.
>   Currently, the linked PR only includes OpenBLAS. A proof-of-concept that 
> shows it actually replaces more dependency than openblas would be helpful. On 
> a high-level, there are two types of builds for mxnet:
>   * User's custom build-from-source: 1) usually dynamic linking is used. 2) 
> users may not enable all features, and users may want to pull a subset of the 
> dependencies. 3) users may want mxnet build system to pull the dependencies, 
> or they may not. (for conan it's ok to just focus on the former)
>   * Binary distribution for pip and maven: 1) static linking is used 
> (requires -fPIC). 2) all features are enabled. 3) dependencies are pulled in 
> with scripts in mxnet.
>   Handling one of the above cases would be a good showcase for the value of 
> the new tool.
> 
> - It's unclear how it impacts people with only an archive of mxnet source 
> code but without network access.
>   This applies for the dependencies that are captured as submodules that you 
> mentioned as a way that mxnet manages dependency.
> 
> - If the value proposition of conan is to simplify the dependency management, 
> then it should unify other solutions instead of keeping these solutions 
> around.
> 
> Overall, it would be helpful to have a clear message such as what exactly 
> conan can replace, and having a proof of concept that works for this would be 
> helpful. Otherwise, I fear that we may be introducing yet another way to 
> manage dependency that further complicates the existing problem.
> 
> That said, I'm not suggesting that we impose the burden to implement 
> everything on you alone, and it's ok to rally people who are interested in 
> this solution to help out. To facilitate this, I created a feature branch so 
> that it's easier for you and people who are enthusiastic about this to work 
> together [1].
> 
> For now, I'm voting -1 to this proposal and I hope you understand.
> 
> -sz
> 
> [1] https://github.com/apache/incubator-mxnet/tree/conan
> 
> On 2019/05/03 07:51:34, Konstantin Ivlev  wrote: 
> > hi Sheng Zha,
> > 
> > on pull request review I was told by Anirudh anirudhacharya and Roshani
> > Nagmote to start discussion/vote on the mxnet dev list. it seems to be a
> > vicious circle now - on GitHub I am told to use vote, and on vote I am told
> > to use GitHub, this doesn't help much.
> > FYI GitHub review stuck, it's already opened since November 2018, and it's
> > still not approved (however, there were no objections during the review).
> > Previous discussion in e-mail thread also didn't encounter any objections,
> > and all questions were answered.
> > JIRA ticket has no discussion at all (except it has duplicates of comments
> > made on GitHub).
> > so let's proceed with the 3-day vote for now, as other communication channels
> > were already tried with no success.
> > 
> > yours sincerely, Konstantin
> > 
> > Fri, May 3, 2019 at 14:17, Sheng Zha :
> > 
> > > Hi Konstantin,
> > >
> > > While conan looks like an option that's worth exploring, given that your
> > > request is to merge the pull request, I'd suggest that the request should
> > > go through the regular pull request review and it doesn't really need a
> > > vote (as it doesn't substitute reviews anyway)
> > >
> > > If you would like to gather more attention to it, feel free to ping in a
> > > discussion thread.
> > >
> > > -sz
> > >
> > > On 2019/05/03 06:29:55, Konstantin Ivlev  wrote:
> > > > Dear MXNet community,
> > > >
> > > > This is the 3-day vote to add conan support for Apache MXNet 
> > > > (incubating)
> > > > version v1.4.1.
> > > > The voting on dev@ list will start May 03 23:59:59 (PST) and close on
> > > May
> > > > 06 23:59:59.
> > > >
> > > > Background: conan is 

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-04 Thread Sheng Zha
Thank you for the explanation and sorry that I missed the earlier context as it 
has been a while. While I like the idea of simplifying the dependency 
management with tools like conan, I have the following concerns on this vote 
as-is (it's also my take on why I think the PR is stuck):

- It's unclear how many of the dependency needs conan can help with in mxnet builds.
  Currently, the linked PR only includes OpenBLAS. A proof-of-concept that 
shows it actually replaces more dependencies than OpenBLAS would be helpful. On a 
high level, there are two types of builds for mxnet:
  * User's custom build-from-source: 1) usually dynamic linking is used. 2) 
users may not enable all features, and users may want to pull a subset of the 
dependencies. 3) users may want mxnet build system to pull the dependencies, or 
they may not. (for conan it's ok to just focus on the former)
  * Binary distribution for pip and maven: 1) static linking is used (requires 
-fPIC). 2) all features are enabled. 3) dependencies are pulled in with scripts 
in mxnet.
  Handling one of the above cases would be a good showcase for the value of the 
new tool.

- It's unclear how it impacts people with only an archive of the mxnet source code 
but without network access.
  This applies to the dependencies that are captured as submodules, which you 
mentioned as one way that mxnet manages dependencies.

- If the value proposition of conan is to simplify the dependency management, 
then it should unify other solutions instead of keeping these solutions around.

Overall, it would be helpful to have a clear message about what exactly conan 
can replace, along with a proof of concept that works for this. Otherwise, I fear 
that we may be introducing yet another way to manage 
dependencies that further complicates the existing problem.

That said, I'm not suggesting that we impose the burden to implement everything 
on you alone, and it's ok to rally people who are interested in this solution 
to help out. To facilitate this, I created a feature branch so that it's easier 
for you and people who are enthusiastic about this to work together [1].

For now, I'm voting -1 to this proposal and I hope you understand.

-sz

[1] https://github.com/apache/incubator-mxnet/tree/conan

On 2019/05/03 07:51:34, Konstantin Ivlev  wrote: 
> hi Sheng Zha,
> 
> on pull request review I was told by Anirudh anirudhacharya and Roshani
> Nagmote to start discussion/vote on the mxnet dev list. it seems to be a
> vicious circle now - on GitHub I am told to use vote, and on vote I am told
> to use GitHub, this doesn't help much.
> FYI GitHub review stuck, it's already opened since November 2018, and it's
> still not approved (however, there were no objections during the review).
> Previous discussion in e-mail thread also didn't encounter any objections,
> and all questions were answered.
> JIRA ticket has no discussion at all (except it has duplicates of comments
> made on GitHub).
> so let's proceed with the 3-day vote for now, as other communication channels
> were already tried with no success.
> 
> yours sincerely, Konstantin
> 
> Fri, May 3, 2019 at 14:17, Sheng Zha :
> 
> > Hi Konstantin,
> >
> > While conan looks like an option that's worth exploring, given that your
> > request is to merge the pull request, I'd suggest that the request should
> > go through the regular pull request review and it doesn't really need a
> > vote (as it doesn't substitute reviews anyway)
> >
> > If you would like to gather more attention to it, feel free to ping in a
> > discussion thread.
> >
> > -sz
> >
> > On 2019/05/03 06:29:55, Konstantin Ivlev  wrote:
> > > Dear MXNet community,
> > >
> > > This is the 3-day vote to add conan support for Apache MXNet (incubating)
> > > version v1.4.1.
> > > The voting on dev@ list will start May 03 23:59:59 (PST) and close on
> > May
> > > 06 23:59:59.
> > >
> > > Background: conan is open-source, freeware, cross-platform package
> > manager
> > > for C and C++ projects, written in python. it provides integration with
> > > various build systems, include CMake. conan may use bintray as a server
> > to
> > > store and download pre-built packages, or packages might be always built
> > > from sources.
> > >
> > > Problem: currently (as for v1.4.1), Apache MXNet (incubating) is using
> > > several ways to fetch 3rd-party dependencies simultaneously, for
> > instance:
> > > 1. download GitHub archives during the build
> > > - OpenBLAS
> > > - OpenCV
> > > 2. conda (alternative way to GitHub archives)
> > > 3. download from CMake
> > > - Intel Math Kernel Library (MKL)
> > > 4. Git submodules
> > &

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread Sheng Zha
Hi Kellen,

Of course, feel free to count my vote in if that's OK. Since I helped prepare 
the artifacts, I wasn't sure if it was appropriate for me to vote, so I refrained 
from voting till now.

+1

-sz

> On May 3, 2019, at 12:19 AM, kellen sunderland  
> wrote:
> 
> Hi Junru, could you give a quick summary of the binding / non-binding votes?
> 
> Damien just want to confirm, are you a member of the PPMC for MXNet?
> Usually committers or community members (like most of us) are encouraged to
> test and vote, but technically count as non-binding for releases.
> 
> Sheng can we assume you're +1 on the release?
> 
>> On Fri, May 3, 2019 at 12:09 AM Junru Shao  wrote:
>> 
>> Hi folks,
>> 
>> So far we have collected enough binding votes. Thank you guys for the hard
>> work testing the release!
>> 
>> The vote on dev@ is closed on May 02 23:59:59 (PST). Next, we are going to
>> vote for the Apache MXNet (incubating) release 1.4.1 on general@ tomorrow,
>> which starts on May 3 2019, 23:59:59 PST, and ends on May 07 2019, 23:59:59
>> PST.
>> 
>> Best,
>> Junru
>> 
>>> On Thu, May 2, 2019 at 11:29 PM Aston Zhang  wrote:
>>> 
>>> +1 (non-binding)
>>> 
>>> Passed all the code at zh.d2l.ai
>>> 
>>> On Thu, May 2, 2019 at 1:46 PM Joshua Z. Zhang 
>>> wrote:
>>> 
>>>> +1 (non-binding)
>>>> 
>>>> Build from source with cuda/cudnn.
>>>> 
>>>> - All tests passed
>>>> - GluonCV unittest scripts passed
>>>> - GluonCV training scripts passed
>>>> - No issue with python multiprocessing
>>>> 
>>>> Best,
>>>> Zhi
>>>>> On May 2, 2019, at 11:34 AM, kellen sunderland <
>>>> kellen.sunderl...@gmail.com> wrote:
>>>>> 
>>>>> +1 (non-binding)
>>>>> 
>>>>> I checked TRT integration builds and tests pass.
>>>>> MD5s
>>>>> Sigs look good.
>>>>> 
>>>>> -Kellen
>>>>> 
>>>>> On Thu, May 2, 2019 at 10:51 AM Damien Stanton <
>>> damien.stan...@gmail.com
>>>>> 
>>>>> wrote:
>>>>> 
>>>>>> +1 (binding)
>>>>>> 
>>>>>> Built from source / Scala / Clojure. All tests pass. The only issue
>> of
>>>>>> minor note: The macOS build guide indicates a directive `brew
>> install
>>>>>> opencv` however this installs OpenCV 4, which is currently
>>> incompatible
>>>>>> with mxnet and causes a failed build. The guide should specify `brew
>>>>>> install opencv@3` until/if version 4 is supported.
>>>>>> 
>>>>>> Best,
>>>>>> Damien
>>>>>> 
>>>>>> On Thu, May 2, 2019 at 12:53 PM Lai Wei 
>> wrote:
>>>>>> 
>>>>>>> +1
>>>>>>> 
>>>>>>> Built from source and tested keras-mxnet working fine.
>>>>>>> 
>>>>>>> Best Regards
>>>>>>> 
>>>>>>> Lai
>>>>>>> 
>>>>>>> 
>>>>>>> On Wed, May 1, 2019 at 4:22 PM Carin Meier 
>>>> wrote:
>>>>>>> 
>>>>>>>> + 1 (binding)
>>>>>>>> 
>>>>>>>> Built Scala/ Clojure and ran tests
>>>>>>>> 
>>>>>>>> On Wed, May 1, 2019 at 7:06 PM Aaron Markham <
>>>>>> aaron.s.mark...@gmail.com>
>>>>>>>> wrote:
>>>>>>>> 
>>>>>>>>> Make that +1 (non-binding)
>>>>>>>>> 
>>>>>>>>> On Wed, May 1, 2019 at 3:42 PM Aaron Markham <
>>>>>>> aaron.s.mark...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>> 
>>>>>>>>>> +1 (binding)
>>>>>>>>>> 
>>>>>>>>>> * Built with GPU and tested the first part of the ssd example.
>>>>>>>>>> * Built with GPU / cross-compiled to arm8 for Jetson.
>>>>>>>>>> * Built Scala/Java on top of the cross-compiled arm8 (ran into
>>>>>>> trouble
>>>>>>>>>> here, but I think this is not popular enough yet to derail
>> things,
>>>

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-03 Thread Sheng Zha
Hi Konstantin,

While conan looks like an option that's worth exploring, given that your 
request is to merge the pull request, I'd suggest that the request should go 
through the regular pull request review; it doesn't really need a vote (as 
a vote doesn't substitute for review anyway).

If you would like to gather more attention to it, feel free to ping in a 
discussion thread.

-sz

On 2019/05/03 06:29:55, Konstantin Ivlev  wrote: 
> Dear MXNet community,
> 
> This is the 3-day vote to add conan support for Apache MXNet (incubating)
> version v1.4.1.
> The voting on dev@ list will start May 03 23:59:59 (PST) and close on May
> 06 23:59:59.
> 
> Background: conan is open-source, freeware, cross-platform package manager
> for C and C++ projects, written in python. it provides integration with
> various build systems, include CMake. conan may use bintray as a server to
> store and download pre-built packages, or packages might be always built
> from sources.
> 
> Problem: currently (as for v1.4.1), Apache MXNet (incubating) is using
> several ways to fetch 3rd-party dependencies simultaneously, for instance:
> 1. download GitHub archives during the build
> - OpenBLAS
> - OpenCV
> 2. conda (alternative way to GitHub archives)
> 3. download from CMake
> - Intel Math Kernel Library (MKL)
> 4. Git submodules
> - cub
> - dlpack
> - dmlc-core
> - googletest
> - mkldnn
> - mshadow
> - onnx-tensorrt
> - openmp
> - ps-lite
> - tvm
> therefore, there are multiple places to look for 3rd parties, and it's hard
> to update them, as you need to remember or figure out how to update a
> particular dependency to a newer version, for instance.
> the current Apache MXNet (incubating) build instructions differ very much per
> platform, and require downloading and unzipping some archives manually, specifying
> variables with paths to these archives, in conjunction with updating git
> submodules.
> 
> Action: merge the pull request providing initial conan support for Apache
> MXNet (incubating). support conan as an alternate approach to fetching various
> 3rd-party dependencies. old approaches will still be available, supported,
> and left intact.
> 
> Below are links to
> 1) conan web-site:  https://conan.io/
> 2) conan GitHub repository: https://github.com/conan-io/conan
> 3) conan documentation: https://docs.conan.io/en/latest/
> 4) bintray: https://bintray.com
> 5) pull request adding conan support to Apache MXNet (incubating):
> https://github.com/apache/incubator-mxnet/pull/13400
> 6) JIRA issue: https://issues.apache.org/jira/browse/MXNET-1229
> 7) previous email discussion:
> https://lists.apache.org/thread.html/301a46a637f7e3c249c475713f701bef7530c32bc92d8834c0882897@%3Cdev.mxnet.apache.org%3E
> 8) MXNet build instructions:
> https://mxnet-tqchen.readthedocs.io/en/latest/how_to/build.html
> 9) MXNet build instructions (Windows):
> https://mxnet.incubator.apache.org/versions/master/install/windows_setup.html
> 10) MXNet build instructions (OSX):
> http://mxnet.incubator.apache.org/versions/master/install/osx_setup.html
> 11) MXNet build instructions (Linux):
> http://mxnet.incubator.apache.org/versions/master/install/ubuntu_setup.html
> 12) MXNet development setup (OSX):
> https://cwiki.apache.org/confluence/display/MXNET/MXNet+Developer+Setup+on+Mac
> 
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
> 
> Best regards,
> Konstantin Ivlev
> 


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-01 Thread Sheng Zha
Hi all,

Reminder that the vote for 1.4.1 release is still ongoing. If you can, please 
help out. Thank you.

-sz

On 2019/04/30 06:51:45, Junru Shao  wrote: 
> Dear MXNet community,
> 
> This is the 3-day vote to release Apache MXNet (incubating) version v1.4.1.
> The voting on dev@ list will start Apr 29 23:59:59 (PST) and close on May
> 02 23:59:59.
> 
> Below are links to
> 1) Release notes:
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.1+Release+Notes
> .
> 2) Release Candidate:
> https://github.com/apache/incubator-mxnet/releases/tag/1.4.1.rc0.
> 3) Source and signatures on Apache dist server:
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/.
> 
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
> 
> Best regards,
> Junru Shao
> 


Re: Invitation to join the #mxnet slack channel

2019-04-29 Thread Sheng Zha
Invite sent. Welcome!

-sz

> On Apr 29, 2019, at 5:17 PM, Nikhil Kulkarni  wrote:
> 
> Hi,
> 
> I'd like to be a part of the MXNet Slack channel. Please send me an
> invitation for the same.
> 
> Thanks for your time.
> 
> -- 
> Regards,
>  -Nikhil Kulkarni


Re: Podling Report Reminder - May 2019

2019-04-24 Thread Sheng Zha
[1] https://cwiki.apache.org/confluence/display/INCUBATOR/May2019

On 2019/04/25 05:17:01, Sheng Zha  wrote: 
> Dear community,
> 
> I drafted our report and posted it in incubator wiki [1] just now. Feel free 
> to provide feedback by 2019-04-28 (Sunday). Thank you for your attention.
> 
> Best regards,
> Sheng Zha
> 
> On 2019/04/25 01:32:01, jmcl...@apache.org wrote: 
> > Dear podling,
> > 
> > This email was sent by an automated system on behalf of the Apache
> > Incubator PMC. It is an initial reminder to give you plenty of time to
> > prepare your quarterly board report.
> > 
> > The board meeting is scheduled for Wed, 15 May 2019, 10:30 am PDT.
> > The report for your podling will form a part of the Incubator PMC
> > report. The Incubator PMC requires your report to be submitted 2 weeks
> > before the board meeting, to allow sufficient time for review and
> > submission (Wed, May 01).
> > 
> > Please submit your report with sufficient time to allow the Incubator
> > PMC, and subsequently board members to review and digest. Again, the
> > very latest you should submit your report is 2 weeks prior to the board
> > meeting.
> > 
> > Candidate names should not be made public before people are actually
> > elected, so please do not include the names of potential committers or
> > PPMC members in your report.
> > 
> > Thanks,
> > 
> > The Apache Incubator PMC
> > 
> > Submitting your Report
> > 
> > --
> > 
> > Your report should contain the following:
> > 
> > *   Your project name
> > *   A brief description of your project, which assumes no knowledge of
> > the project or necessarily of its field
> > *   A list of the three most important issues to address in the move
> > towards graduation.
> > *   Any issues that the Incubator PMC or ASF Board might wish/need to be
> > aware of
> > *   How has the community developed since the last report
> > *   How has the project developed since the last report.
> > *   How does the podling rate their own maturity.
> > 
> > This should be appended to the Incubator Wiki page at:
> > 
> > https://cwiki.apache.org/confluence/INCUBATOR/May2019
> > 
> > Note: This is manually populated. You may need to wait a little before
> > this page is created from a template.
> > 
> > Mentors
> > ---
> > 
> > Mentors should review reports for their project(s) and sign them off on
> > the Incubator wiki page. Signing off reports shows that you are
> > following the project - projects that are not signed may raise alarms
> > for the Incubator PMC.
> > 
> > Incubator PMC
> > 
> 


Re: Podling Report Reminder - May 2019

2019-04-24 Thread Sheng Zha
Dear community,

I drafted our report and posted it in incubator wiki [1] just now. Feel free to 
provide feedback by 2019-04-28 (Sunday). Thank you for your attention.

Best regards,
Sheng Zha

On 2019/04/25 01:32:01, jmcl...@apache.org wrote: 
> Dear podling,
> 
> This email was sent by an automated system on behalf of the Apache
> Incubator PMC. It is an initial reminder to give you plenty of time to
> prepare your quarterly board report.
> 
> The board meeting is scheduled for Wed, 15 May 2019, 10:30 am PDT.
> The report for your podling will form a part of the Incubator PMC
> report. The Incubator PMC requires your report to be submitted 2 weeks
> before the board meeting, to allow sufficient time for review and
> submission (Wed, May 01).
> 
> Please submit your report with sufficient time to allow the Incubator
> PMC, and subsequently board members to review and digest. Again, the
> very latest you should submit your report is 2 weeks prior to the board
> meeting.
> 
> Candidate names should not be made public before people are actually
> elected, so please do not include the names of potential committers or
> PPMC members in your report.
> 
> Thanks,
> 
> The Apache Incubator PMC
> 
> Submitting your Report
> 
> --
> 
> Your report should contain the following:
> 
> *   Your project name
> *   A brief description of your project, which assumes no knowledge of
> the project or necessarily of its field
> *   A list of the three most important issues to address in the move
> towards graduation.
> *   Any issues that the Incubator PMC or ASF Board might wish/need to be
> aware of
> *   How has the community developed since the last report
> *   How has the project developed since the last report.
> *   How does the podling rate their own maturity.
> 
> This should be appended to the Incubator Wiki page at:
> 
> https://cwiki.apache.org/confluence/INCUBATOR/May2019
> 
> Note: This is manually populated. You may need to wait a little before
> this page is created from a template.
> 
> Mentors
> ---
> 
> Mentors should review reports for their project(s) and sign them off on
> the Incubator wiki page. Signing off reports shows that you are
> following the project - projects that are not signed may raise alarms
> for the Incubator PMC.
> 
> Incubator PMC
> 


Re: assimilation of mshadow into the MXNet codebase

2019-04-24 Thread Sheng Zha
The community has agreed to donate mshadow to the mxnet code base. I will start 
the migration and build logic changes soon.

-sz

On 2019/04/07 21:47:39, Sheng Zha  wrote: 
> I agree it would make development easier to donate mshadow to mxnet code 
> base, since mshadow is only used in MXNet. I support donating the mshadow 
> code to mxnet and I started an RFC for this in mshadow [1].
> 
> [1] https://github.com/dmlc/mshadow/issues/373
> 
> -sz
> 
> On 2019/04/06 04:38:19, Tianqi Chen  wrote: 
> > Technically, mshadow is sufficient for MXNet. Adopting other libraries (
> > eigen or xtensor) will unnecessarily increase the codebase complexity
> > without any additional gains.
> > 
> > Given that mshadow is only used by mxnet. I do support donating it into
> > mxnet codebase.
> > To respect the original mshadow community. I would recommend starting a
> > community RFC In the mshadow github issue for a week, before we start the
> > migrating process.
> > Also, I would recommend a rebase merge just like the case of MXNet.jl code
> > base to preserve the contribution history.
> > 
> > Tianqi
> > 
> > 
> > On Fri, Apr 5, 2019 at 9:25 PM Alfredo Luque
> >  wrote:
> > 
> > > Do you have a link to both of these proposals?
> > >
> > > On Fri, Apr 5, 2019 at 20:14 Anirudh Acharya 
> > > wrote:
> > >
> > > > Hi Pedro,
> > > >
> > > > mshadow is mostly used for tensor arithmetic. There have been 
> > > > discussions
> > > > about including it within mxnet. I think it is a good idea.
> > > >
> > > > As a more long term solution using libraries like eigen to perform 
> > > > linear
> > > > algebra operations was also suggested by anirudh2290@. I think xtensor(
> > > > https://github.com/QuantStack/xtensor ) can also be a candidate here.
> > > >
> > > > -
> > > > Anirudh
> > > >
> > > >
> > > > On Fri, Apr 5, 2019 at 7:03 PM Pedro Larroy <
> > > pedro.larroy.li...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi
> > > > >
> > > > > Some developers have noticed that working in mshadow is cumbersome as
> > > > > it's a 3rdparty subrepo.
> > > > >
> > > > > Since mshadow is a bunch of headers which don't have much of
> > > > > independent tests / library functionality, me and other developers
> > > > > believe that it would be good to assimilate this code in the
> > > > > repository for ease of contribution and changes without having to go
> > > > > through contortions to test PRs that modify mshadow.
> > > > >
> > > > > Would anybody oppose this change?
> > > > >
> > > > > Thanks and have a nice weekend.
> > > > >
> > > > > Pedro.
> > > > >
> > > >
> > >
> > 
> 


Re: Interested in MXNet Development

2019-04-22 Thread Sheng Zha
Hi Damien,

Welcome to MXNet. I just sent an invite for the Apache Slack.

Feel free to ask here if you have development related questions. Many
people here are more than happy to help you get started.

-sz

On Mon, Apr 22, 2019 at 11:48 AM Damien Stanton 
wrote:

> Apologies - this request was specifically for the MXNet Slack invite
> (message sent before I was completed).
>
> On Mon, Apr 22, 2019 at 2:40 PM Damien Stanton 
> wrote:
>
> > Specifically, I am interested in MXNet's applicability in the Kotlin,
> > Rust, and Clojure spaces.
> >
> > Thanks!
> > Damien
> >
>


Re: [MXNET 2.0 Wishlist] [DISCUSS] Refine the InferStorageType and memory planning pass

2019-04-10 Thread Sheng Zha
Relay is NNVM v2. The main difference between NNVM and Relay is that the latter 
can represent control flow in the graph. Translating the suggested optimization pass 
in this thread from NNVM to Relay should be straightforward. Given that, I'd 
also suggest starting early with NNVM.

-sz

> On Apr 10, 2019, at 8:26 AM, Lv, Tao A  wrote:
> 
> 
> @Tianqi,
> 
> Thank you for the information. I will take a look on that to see if we can 
> take some advantages from it.
> 
> @Junru,
> 
> The reason for why we want to hold this change to 2.0 is that we know there 
> is a discussion in TVM community that NNVM will be deprecated soon and then I 
> think MXNet has to move to a new IR either NNVM v2 or Relay. As most changes 
> in this proposal are related to IR passes, we definitely don't want to spend 
> much effort on something which is deprecating. 2.0 seems to be a more 
> appropriate timing for us to make these changes. But I agree with you, we can 
> start to do some experiments on the existing architects and NNVM IR.
> 
> -tao
> 
> -Original Message-
> From: Junru Shao [mailto:junrushao1...@gmail.com] 
> Sent: Wednesday, April 10, 2019 1:34 PM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [MXNET 2.0 Wishlist] [DISCUSS] Refine the InferStorageType and 
> memory planning pass
> 
> Agreed with Tianqi that we could have better implementation once we have 
> better tvm nnvm v2 integration. For now I believe that we shouldn't block the 
> development of Intel folks.
> 
> On Tue, Apr 9, 2019 at 10:10 PM Tianqi Chen 
> wrote:
> 
>> Such kind of conversion can be viewed as an enhanced version of 
>> AlterOpLayout in the TVM relay Pass
>> 
>>> On Tue, Apr 9, 2019 at 8:03 PM Lv, Tao A  wrote:
>>> 
>>> 
>>> Thank you Tianqi and Sam for the kind suggestions.
>>> 
>>> @Tianqi,
>>> 
>>> Can you please point me to the code of this pass or do you think 
>>> anyone from TVM community can help to educate me on this? I'm very 
>>> happy to
>> learn
>>> from that.
>>> 
>>> Just one note, we are not only doing layout transformation but also 
>>> want to have more memory for layout transformation.
>>> For example, (N=32, C=3, H=256, W=256) will be padded to (N=32, 
>>> C=16, H=256, W=256) on channel dimension then convert (N=32, C=16, 
>>> H=256,
>> W=256)
>>> to nchw16c so we can leverage corresponding optimal computation kernels.
>>> That's why we also need changes to the memory planning pass.
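
A minimal numpy sketch of the pad-then-block (nChw16c) conversion described in the quoted paragraph above, purely for illustration (MKL-DNN performs this internally; the shapes follow the example given):

    import numpy as np

    # NCHW input with C=3, as in the example above
    x = np.random.rand(32, 3, 256, 256).astype(np.float32)

    block = 16
    pad_c = (block - x.shape[1] % block) % block              # pad channels up to a multiple of 16
    x_pad = np.pad(x, ((0, 0), (0, pad_c), (0, 0), (0, 0)))   # -> (32, 16, 256, 256)

    # nChw16c: split the channel axis into (C // 16, 16) blocks and move the
    # inner 16-channel block to the innermost axis
    n, c, h, w = x_pad.shape
    x_blocked = x_pad.reshape(n, c // block, block, h, w).transpose(0, 1, 3, 4, 2)
    print(x_blocked.shape)                                    # (32, 1, 256, 256, 16)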
>>> 
>>> 
>>> @Sam,
>>> 
>>> Yes, definitely we're treating MKL-DNN as an accelerator on CPU.
>>> Previously we used it to accelerate certain critical operators in 
>>> MXNet
>> in
>>> certain situations, eg. FP32 
>>> convolution/deconvolution/fullyConnected,
>> etc.
>>> But along with the evolving of both MXNet and MKL-DNN, we started to 
>>> do more which might not supported by MXNet in original CPU 
>>> implementation, such as quantization and graph fusion. So MKL-DNN 
>>> backend is also
>> changing
>>> from a simple `accelerator` to a `default` backend on CPU. And I 
>>> totally agree with you that we need think more about the software 
>>> architecture
>> for
>>> maintainability, testability and readability - that's why I sent out 
>>> this proposal to get more ideas from the community.
>>> 
>>> 
>>> -tao
>>> 
>>> -Original Message-
>>> From: Skalicky, Sam [mailto:sska...@amazon.com.INVALID]
>>> Sent: Wednesday, April 10, 2019 2:24 AM
>>> To: dev@mxnet.incubator.apache.org
>>> Subject: Re: [MXNET 2.0 Wishlist] [DISCUSS] Refine the 
>>> InferStorageType and memory planning pass
>>> 
>>> I agree with Tianqi. We should let MKLDNN partitipate in memory 
>>> planning by first having a separate NNVM pass and then using that 
>>> info in the regular memory planning phase.
>>> 
>>> Its starting to sound like MKLDNN should be treated like an 
>>> accelerator rather than an operator library. As it has explicit 
>>> needs and can provide acceleration when given extra capabilities in 
>>> MXNet like having input to the memory planning NNVM pass. It also 
>>> has special tensor formatting
>> needs
>>> and conversions that could be best architected in another way than 
>>> they currently are.
>>> 
>>> We need to think about how we want to architect this for 
>>> maintainability, testability, and readability.
>>> 
>>> Sam
>>> 
>>> 
 On Apr 9, 2019, at 11:11 AM, Tianqi Chen 
 
>>> wrote:
 
 The layout transformation should really be a separate optimization 
 pass rather than memory planning. As is done in the TVM stack. If 
 we want to do a clean slate solution, I would recommend looking 
 into that
>>> instead.
 
 TIanqi
 
> On Tue, Apr 9, 2019 at 1:46 AM Lv, Tao A  wrote:
> 
> 
> 
> Hi dev,
> 
> 
> 
> As we're discussing the roadmap for MXNet 2.0, I would like to 
> start a thread about refining the InferStorageType and memory 
> planning pass in MXNet and hope it can happen as a part of the 2.0 
> release.
> 
> 
> 
> Thanks to 

Re: assimilation of mshadow into the MXNet codebase

2019-04-07 Thread Sheng Zha
mshadow depends on *a* BLAS library, and there's nothing inherent in the mshadow 
code base that requires OpenBLAS over MKL. The linked issue #11769 seems to be 
more of a build logic issue.

-sz

On 2019/04/07 18:56:43, Aaron Markham  wrote: 
> +1
> Reduced complexity. Choice of math library... Hopefully you can just
> install MKL and not be forced into mshadow's dependency on OpenBLAS. This
> could make Windows setup easier.
> Maybe this issue will get fixed: #11769.
> 
> On Sun, Apr 7, 2019, 00:51 Junru Shao  wrote:
> 
> > Does merging mshadow into mxnet bring any actual benefit for customers in
> > sense of performance, portability, or anything else?
> >
> > On Fri, Apr 5, 2019 at 9:38 PM Tianqi Chen 
> > wrote:
> >
> > > Technically, mshadow is sufficient for MXNet. Adopting other libraries (
> > > eigen or xtensor) will unnecessarily increase the codebase complexity
> > > without any additional gains.
> > >
> > > Given that mshadow is only used by mxnet. I do support donating it into
> > > mxnet codebase.
> > > To respect the original mshadow community. I would recommend starting a
> > > community RFC In the mshadow github issue for a week, before we start the
> > > migrating process.
> > > Also, I would recommend a rebase merge just like the case of MXNet.jl
> > code
> > > base to preserve the contribution history.
> > >
> > > Tianqi
> > >
> > >
> > > On Fri, Apr 5, 2019 at 9:25 PM Alfredo Luque
> > >  wrote:
> > >
> > > > Do you have a link to both of these proposals?
> > > >
> > > > On Fri, Apr 5, 2019 at 20:14 Anirudh Acharya 
> > > > wrote:
> > > >
> > > > > Hi Pedro,
> > > > >
> > > > > mshadow is mostly used for tensor arithmetic. There have been
> > > discussions
> > > > > about including it within mxnet. I think it is a good idea.
> > > > >
> > > > > As a more long term solution using libraries like eigen to perform
> > > linear
> > > > > algebra operations was also suggested by anirudh2290@. I think
> > > xtensor(
> > > > > https://github.com/QuantStack/xtensor ) can also be a candidate
> > here.
> > > > >
> > > > > -
> > > > > Anirudh
> > > > >
> > > > >
> > > > > On Fri, Apr 5, 2019 at 7:03 PM Pedro Larroy <
> > > > pedro.larroy.li...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi
> > > > > >
> > > > > > Some developers have noticed that working in mshadow is cumbersome
> > as
> > > > > > it's a 3rdparty subrepo.
> > > > > >
> > > > > > Since mshadow is a bunch of headers which don't have much of
> > > > > > independent tests / library functionality, other developers and I
> > > > > > believe that it would be good to assimilate this code in the
> > > > > > repository for ease of contribution and changes without having to
> > go
> > > > > > through contortions to test PRs that modify mshadow.
> > > > > >
> > > > > > Would anybody oppose this change?
> > > > > >
> > > > > > Thanks and have a nice weekend.
> > > > > >
> > > > > > Pedro.
> > > > > >
> > > > >
> > > >
> > >
> >
> 


Re: assimilation of mshadow into the MXNet codebase

2019-04-07 Thread Sheng Zha
I agree it would make development easier to donate mshadow to mxnet code base, 
since mshadow is only used in MXNet. I support donating the mshadow code to 
mxnet and I started an RFC for this in mshadow [1].

[1] https://github.com/dmlc/mshadow/issues/373

-sz

On 2019/04/06 04:38:19, Tianqi Chen  wrote: 
> Technically, mshadow is sufficient for MXNet. Adopting other libraries (
> eigen or xtensor) will unnecessarily increase the codebase complexity
> without any additional gains.
> 
> Given that mshadow is only used by mxnet. I do support donating it into
> mxnet codebase.
> To respect the original mshadow community. I would recommend starting a
> community RFC In the mshadow github issue for a week, before we start the
> migrating process.
> Also, I would recommend a rebase merge just like the case of MXNet.jl code
> base to preserve the contribution history.
> 
> Tianqi
> 
> 
> On Fri, Apr 5, 2019 at 9:25 PM Alfredo Luque
>  wrote:
> 
> > Do you have a link to both of these proposals?
> >
> > On Fri, Apr 5, 2019 at 20:14 Anirudh Acharya 
> > wrote:
> >
> > > Hi Pedro,
> > >
> > > mshadow is mostly used for tensor arithmetic. There have been discussions
> > > about including it within mxnet. I think it is a good idea.
> > >
> > > As a more long term solution using libraries like eigen to perform linear
> > > algebra operations was also suggested by anirudh2290@. I think xtensor(
> > > https://github.com/QuantStack/xtensor ) can also be a candidate here.
> > >
> > > -
> > > Anirudh
> > >
> > >
> > > On Fri, Apr 5, 2019 at 7:03 PM Pedro Larroy <
> > pedro.larroy.li...@gmail.com>
> > > wrote:
> > >
> > > > Hi
> > > >
> > > > Some developers have noticed that working in mshadow is cumbersome as
> > > > it's a 3rdparty subrepo.
> > > >
> > > > Since mshadow is a bunch of headers which don't have much of
> > > > independent tests / library functionality, other developers and I
> > > > believe that it would be good to assimilate this code in the
> > > > repository for ease of contribution and changes without having to go
> > > > through contortions to test PRs that modify mshadow.
> > > >
> > > > Would anybody oppose this change?
> > > >
> > > > Thanks and have a nice weekend.
> > > >
> > > > Pedro.
> > > >
> > >
> >
> 


Re: MXNet 1.4.1 Release Proposal

2019-04-04 Thread Sheng Zha
Thanks Hagay for proposing the release and for Junru to volunteer to drive
the release. I will help Junru as the committer for this release.

-sz

On Thu, Apr 4, 2019 at 2:18 PM Junru Shao  wrote:

> Hi Hagay,
>
> I have some experiences in MXNet development, and would love to volunteer
> for driving this release.
>
> Thank you so much!
>
> Best,
> Junru
>
> On Thu, Apr 4, 2019 at 1:51 PM Hagay Lupesko  wrote:
>
> > Hello MXNet community,
> >
> > As previously discussed in [0
> > <
> >
> https://lists.apache.org/thread.html/a5f444999bf428d06e691b1856392ae5ebb24a3485eaa484a73de10d@%3Cdev.mxnet.apache.org%3E
> > >],
> > and per the feedback from Pedro, Kellen and Sheng, I'd like to propose
> > releasing MXNet 1.4.1.
> > MXNet 1.4.1 is a patch release on top of 1.4.0 (following semver[1
> > ]), that includes backwards compatible bug fixes -
> a
> > couple I am aware of are mem leaks in Scala API, Gluon RNN and NDArrays.
> >
> > I went ahead and created a draft release page on CWiki [2
> > <
> >
> https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status
> > >],
> > thanks to Yuxi Hu for adding a mem leak fix, and thanks to Andrew Ayres,
> > Qing Lan and Sergey Sokolov for fixing bugs in 1.4.0 - I went ahead and
> > added your fixes to the list.
> >
> > Asking the community to:
> > (1) Any bug fix or regression you identified and fixed after 1.4.0
> release?
> > please add it to the release proposal wiki (or msg me on Slack if you
> don't
> > have write access, happy to do it).
> > (2) Any comments or suggestions on the release wiki? please leave
> comments
> > on the wiki or reply to this email.
> > (3) I am looking for volunteers to drive the release - ideally we'll have
> > two volunteers: a non-committer and a shepherd committer that can also
> help
> > with the logistics that require permissions. This is a great way to
> > contribute to the community and help MXNet!
> >
> > I plan to check-in in a few days and finalize the proposal, so timely
> > response is appreciated.
> >
> > Cheers,
> > Hagay
> >
> > [0]
> >
> >
> https://lists.apache.org/thread.html/a5f444999bf428d06e691b1856392ae5ebb24a3485eaa484a73de10d@%3Cdev.mxnet.apache.org%3E
> > [1] https://semver.org/
> > [2]
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status
> >
>


[Discussion] 1.5.0 roadmap

2019-04-04 Thread Sheng Zha
Hi all,

In order to coordinate efforts for the next 1.5.0 minor release, let's join the 
discussion here: https://github.com/apache/incubator-mxnet/issues/14619

Once we have some clarity on the items to track for 1.5.0 release from the 
discussion, we will come back to the list and propose a timeline for it. Thanks.

-sz


Re: Discussing plans for next MXNet releases

2019-04-02 Thread Sheng Zha
Hi Hagay,

Thanks for taking the initiative. The proposed scope in this thread is in
my opinion too large to fit in a single thread, so I'd suggest that we
start separate threads for each individual release item. To elaborate on
the reasons based on each individual item:
- For 1.4.1 which is in the wiki page draft, I'd suggest refraining from
adding new features there since patch release should be about bug fixes.
- For 1.5, there are efforts such as AMP and general improvement for fp16
support in operators, quantization efforts, etc., that should be included.
I may have a bit more context on this so I'm happy to help initiate the
discussion.
- For 2.0, I think it would be more of a roadmap discussion at this stage.

I hope this makes sense. Would you mind starting a thread focusing on 1.4.1
patch release?

-sz


On Tue, Apr 2, 2019 at 5:06 PM Hagay Lupesko  wrote:

> Dear MXNet community,
>
> I wanted to initiate a discussion about the plan and scope for the next
> MXNet releases.
> I suggest we focus on three releases, and get the process going in
> parallel:
> (1) 1.4.1 - patch release on top of 1.4.0 to address some perf regressions
> and memory leaks I am aware of, such as the memory leak fixed on Scala [0
> ]. I went ahead and
> created a draft release proposal wiki [1
> <
> https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status
> >
> ].
> (2) 1.5.0 - a minor release to add new features introduced since 1.4.0
> release started (back in Nov 2018!), such as various performance
> improvements: aggregate SGD, in-place updates in optimizers, gpu support
> for image processing operators and many more features useful for MXNet’s
> users.
> (3) 2.0 - an exciting major release that will include major enhancements to
> MXNet.
>
> Timeframes will probably vary based on the scope. I think we should plan to
> start 1.4.1 release within a couple of weeks, 1.5.0 should target starting
> once we release 1.4.1, and 2.0 timeline is TBD - but such a major release
> will require more time to discuss and decide in the community.
>
> I was thinking to get started through:
> (1) Draft proposals on CWiki, where the community can add content and
> propose scope and features.
> (2) Setup online meetings, where anyone can dial into, from anywhere, where
> we will have a chance to discuss in voice+video.
> (3) With (1)+(2) have a scope and timeline that the community, in large,
> supports.
>
> Would be great to get the community's feedback and suggestions, and please
> reply if you would like to be involved in the effort of supporting the
> releases!
>
> MXNet is awesome, looking forward to working together to make it even
> better!
> Hagay
>
> [0] https://github.com/apache/incubator-mxnet/pull/14586
> [1]
>
> https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status
>


Re: Podling Report Reminder - April 2019

2019-04-02 Thread Sheng Zha
Thanks for the reminder. I’m working on it and will post the draft back to the 
list, and would appreciate feedback from the community by then.

-sz

> On Apr 2, 2019, at 5:23 PM, Tianqi Chen  wrote:
> 
> It would be great if the PPMC coordinate and prepare the report
> 
>> On Tue, Apr 2, 2019 at 4:00 PM Hagay Lupesko  wrote:
>> 
>> Is anyone working on the podling report?
>> I'm happy to take care of that if no one else is planning to do it.
>> 
>> Cheers,
>> Hagay
>> 
>>> On Fri, Mar 29, 2019 at 4:06 PM  wrote:
>>> 
>>> Dear podling,
>>> 
>>> This email was sent by an automated system on behalf of the Apache
>>> Incubator PMC. It is an initial reminder to give you plenty of time to
>>> prepare your quarterly board report.
>>> 
>>> The board meeting is scheduled for Wed, 17 April 2019, 10:30 am PDT.
>>> The report for your podling will form a part of the Incubator PMC
>>> report. The Incubator PMC requires your report to be submitted 2 weeks
>>> before the board meeting, to allow sufficient time for review and
>>> submission (Wed, April 03).
>>> 
>>> Please submit your report with sufficient time to allow the Incubator
>>> PMC, and subsequently board members to review and digest. Again, the
>>> very latest you should submit your report is 2 weeks prior to the board
>>> meeting.
>>> 
>>> Candidate names should not be made public before people are actually
>>> elected, so please do not include the names of potential committers or
>>> PPMC members in your report.
>>> 
>>> Thanks,
>>> 
>>> The Apache Incubator PMC
>>> 
>>> Submitting your Report
>>> 
>>> --
>>> 
>>> Your report should contain the following:
>>> 
>>> *   Your project name
>>> *   A brief description of your project, which assumes no knowledge of
>>>the project or necessarily of its field
>>> *   A list of the three most important issues to address in the move
>>>towards graduation.
>>> *   Any issues that the Incubator PMC or ASF Board might wish/need to be
>>>aware of
>>> *   How has the community developed since the last report
>>> *   How has the project developed since the last report.
>>> *   How does the podling rate their own maturity.
>>> 
>>> This should be appended to the Incubator Wiki page at:
>>> 
>>> https://wiki.apache.org/incubator/April2019
>>> 
>>> Note: This is manually populated. You may need to wait a little before
>>> this page is created from a template.
>>> 
>>> Mentors
>>> ---
>>> 
>>> Mentors should review reports for their project(s) and sign them off on
>>> the Incubator wiki page. Signing off reports shows that you are
>>> following the project - projects that are not signed may raise alarms
>>> for the Incubator PMC.
>>> 
>>> Incubator PMC
>>> 
>> 


Re: Call for Ideas and Approaches to Community Building

2019-03-06 Thread Sheng Zha
First, I strongly echo Steffen's points on educating users on the usage of 
MXNet and DL, and my team at my day job takes it as its mission. Just to name a 
few related efforts: d2l.ai (a whole book on deep learning with mxnet), the 
numerous tutorials in GluonCV, GluonNLP, DGL toolkits, the public course on 
introduction to deep learning at UC Berkeley by Mu Li and Alex Smola.

Recognizing non-code contribution has also been one of the recent focuses of the 
PPMC. In fact, Aston Zhang, who made significant contribution to the community 
by looking after the user forum and writing the d2l book, has just joined us as 
a committer of our community. Kuo Ding, also known as chinakook, the maintainer 
of the Awesome-MXNet list and advocate of MXNet in various social media, has 
also just accepted our invite to become a committer. PPMC will definitely be on 
the watch for more.

In addition, as PPMC member, I’m also interested in other ways to help 
technical contributors to become familiar with the code base. I hope to grow 
the pool of competencies in the committer group in various areas, by helping 
interested community members. A larger pool of competencies makes better 
experience for all contributors in the form of meaningful technical feedbacks 
and code reviews. In my case, I spend half of my time on Github reviewing code, 
and also participating in a wide range of design reviews. I’ve been encouraging 
my committer peers to do the same. From the perspective of an experienced 
committer, I think growth in the pool of technical competencies is sorely 
needed, and I’m definitely interested in manageable ways to reach more people 
and help those with the drive to grow with the community.

Towards meet-ups and hangouts, I have mixed feeling. On one hand, meeting other 
MXNetters is exciting and I’d definitely like to indulge. On the other hand, 
meet-ups tend to make it impossible, or at least time-consuming for people who 
didn’t attend to digest. By spending the same time on carefully writing answers 
to issues, I could easily have helped 10x more people across space and time. 
Meet-ups also encourage off-list decision-making which is a bit concerning to 
me too. These are my personal takes and I thought I should be candid about this 
so that people who do attend can be more conscious about taking the 
conversations back to the list.

In terms of nomination outside one’s own organization, my impression is that 
this comes from the desire of growing the diversity in the community, 
specifically in terms of the day job. So the “organization” refers specifically 
to day job employer. From my perspective, I think all nomination should be 
encouraged, and the day job of a community member doesn’t affect the merit one 
has earned. That said, given that we strive to grow an open and transparent 
community, a community member whose impact is limited only to one’s day job is 
a red flag. The failure to leave any impression outside of the same 
organization is likely a symptom of relying too much on private-channel 
communication or making decision outside the community. If such case arises, I 
think PPMC members can be trusted to recognize it as a problem.

-sz

> On Mar 6, 2019, at 10:03 PM, Steffen Rochel  wrote:
> 
> Thanks Carin for starting the discussion on this important topic.
> My suggestions:
> 1) get more involved educating the public about MXNet and how to use it for
> DL. Carin's Can You GAN?  is a
> great example and there are many others. Meetups are another good way (DL
> with MXNet 
> has now 10 local chapters with 1842 members in 8 countries, but there are
> still a lot of "white" areas). Recognize contributors like Cosmin
>  and Lai
> 
> .
> 2) Recognize non-code contributors like all the people answering questions
> at the various discussion forums.
> 3) invite people to contribute - github repo has zero issues labelled "Help
> Wanted", 21 out of 993 open issues are labelled as "Good First Issues".
> Recognize people who go through the effort of classification of issues and
> mentor the new contributors
> 4) talk to the "drive by contributors" i.e. people who contribute once and
> then disappear. What is preventing them from contributing more than once?
> 5) be more active communicating and cross-promoting events related to MXNet
> through announcements on dev@, discussion forum and re-tweets.
> 
> I agree with Tianqi on "One approach toward building a more diverse
> community is to acknowledge the fact that we want to encourage interactions
> in the Apache way beyond our physical cycle." However, I disagree with his
> suggestion regarding "One principle to toward that is to encourage PMC
> members only nominate committers from other organizations" 

Re: CI woes pt.2

2019-03-03 Thread Sheng Zha
CI has been down again for several days now. Is there something I can help with?

-sz

On 2019/02/27 14:35:05, Per da Silva  wrote: 
> Hi everyone,
> 
> The PR temporarily disabling windows tests has been merged, so the windows
> checks shouldn't block progress for the time being. Please retrigger any
> PRs you have open. We are still working hard on solving the windows
> instance issues. Once that is sorted out, we'll come back with more news.
> 
> Cheers,
> 
> Per
> 


Re: [DISCUSS] Process to remove deprecated operators

2019-02-27 Thread Sheng Zha
MXNet follows semantic versioning so we will be able to delete them in the
next major release.

-sz

On Wed, Feb 27, 2019 at 8:53 PM Lin Yuan  wrote:

> Dear Community,
>
> In MXNet there are many legacy operators such as this
> <
> http://mxnet.incubator.apache.org/versions/master/api/python/symbol/symbol.html?highlight=convolution_v1#mxnet.symbol.Convolution_v1
> >
> that has been marked DEPRECATED for several releases. However, these
> operators still exist in our code. This causes a few problems:
>
> 1) Make the codebase bloated and reduce readability
> 2) Increase unnecessary maintenance effort
> 3) Bug prone, as some people will look at this legacy code as an example
> 4) Cause confusion to end users and make the documentation page lengthy
>
> I would like to propose the following process (if there is no existing one)
> to remove deprecated operators from our code base.
>
> 1. Document the deprecated operators/environment variables in the release
> notes as well as the man pages.
> 2. Limit the life cycle of deprecated operators/arguments to two minor
> releases. For example, if an operator is marked deprecated in the 1.4 release,
> it will be removed in the 1.6 release.
> 3. If customers raise concerns during the 1.4 and 1.5 releases, we can
> convert the deprecated operator back to current status and it will be
> treated as a new operator.
> 4. PRs that remove deprecated operators should contain [Cleanup] in the title.
>
> Any comment is appreciated.
>
> Lin
>


Re: [Design Review Request] Extending model save/load API

2019-02-22 Thread Sheng Zha
Hi Sandeep,

> users need to know input name

This information is already in the files from Gluon export API. Here's how
you get them:
In [1]: import mxnet as mx
   ...: net = mx.gluon.model_zoo.vision.alexnet(pretrained=True)
   ...: net.hybridize()
   ...: net(mx.nd.ones((1,3,224,224)))
   ...: net.export('alexnet')
   ...: net_sym = mx.sym.load('alexnet-symbol.json')
   ...: net_params = mx.nd.load('alexnet-0000.params')  # export() saves params as <prefix>-<epoch:04d>.params
   ...: print('input is {}'.format([n for n in net_sym.list_inputs() if
'arg:{}'.format(n) not in net_params.keys()]))
   ...: print('output is {}'.format(net_sym.list_outputs()))
   ...:
   ...:
input is ['data']
output is ['alexnet0_dense2_fwd_output']

> shape

It's true that the shape needs to be specified before binding. The catch with
coding it into the symbol file is that many networks support variable-size
data, such as RNNs and CNNs, so hard-coding a static shape makes the symbol
file strictly less useful. Also, whoever develops the inference code has to
know the shape at the time of binding anyway, because otherwise they can't
tell whether the data they feed is "legal".
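
For reference, a minimal sketch of what that binding step looks like today,
reusing the alexnet symbol loaded above (Module API; the (1, 3, 224, 224)
shape is just an example that the deployer picks for their service):

mod = mx.mod.Module(symbol=net_sym, data_names=['data'], label_names=None)
mod.bind(data_shapes=[('data', (1, 3, 224, 224))], for_training=False)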

IMO the proposed additional metadata seems to be coupling the model
development and deployment instead of decoupling them.

-sz


On Thu, Feb 21, 2019 at 2:36 PM sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> Hi Sheng,
>
> By stating unusable out of the box, I meant that when loading a model for
> inference today, users need to know the input names and shapes for binding. If this
> information were part of the model, it would make the path from model training to
> model deployment much simpler and more decoupled.
>
> Best,
> Sandeep
>
> On Thu, Feb 21, 2019 at 2:10 PM Sheng Zha  wrote:
>
> > Hi Sandeep,
> >
> > In the design doc, you stated that
> > > Input/Output signature are not part of the model: Saved model missing
> the
> > information about the input/output descriptions, like name/shape, making
> > the saved model unusable out of the box.
> >
> > Could you elaborate why you think this is the case?
> >
> > -sz
> >
> > On Thu, Feb 21, 2019 at 11:11 AM sandeep krishnamurthy <
> > sandeep.krishn...@gmail.com> wrote:
> >
> > > Hello MXNet community,
> > >
> > > I have put up a design proposal to extend the MXNet model saving and
> > > loading APIs to solve the following 2 problems:
> > > 1. Currently, model export/save APIs do not allow you to export the
> data
> > > transformations as part of the model. By allowing this, we greatly
> > simplify
> > > the user experience during inference across all language bindings and
> > also
> > > we see good performance gains.
> > > 2. Currently, model export/save APIs do not save input/output signature
> > > like input data names, shapes etc.. By saving it as part of the model,
> we
> > > simplify again the inference and model deployment logic.
> > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103090063
> > >
> > > Thanks to our fellow community members (Zhi, Naveen, Pinar and Jake)
> for
> > > their initial feedback. I request the community to review and provide
> > > feedback on the proposal.
> > >
> > > --
> > > Sandeep Krishnamurthy
> > >
> >
>
>
> --
> Sandeep Krishnamurthy
>


Re: [Design Review Request] Extending model save/load API

2019-02-21 Thread Sheng Zha
Hi Sandeep,

In the design doc, you stated that
> Input/Output signature are not part of the model: Saved model missing the
information about the input/output descriptions, like name/shape, making
the saved model unusable out of the box.

Could you elaborate why you think this is the case?

-sz

On Thu, Feb 21, 2019 at 11:11 AM sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> Hello MXNet community,
>
> I have put up a design proposal to extend the MXNet model saving and
> loading APIs to solve the following 2 problems:
> 1. Currently, model export/save APIs do not allow you to export the data
> transformations as part of the model. By allowing this, we greatly simplify
> the user experience during inference across all language bindings and also
> we see a good performance gains.
> 2. Currently, model export/save APIs do not save input/output signature
> like input data names, shapes etc.. By saving it as part of the model, we
> simplify again the inference and model deployment logic.
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103090063
>
> Thanks to our fellow community members (Zhi, Naveen, Pinar and Jake) for
> their initial feedback. I request the community to review and provide
> feedback on the proposal.
>
> --
> Sandeep Krishnamurthy
>


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc3

2019-02-19 Thread Sheng Zha
-[Y] Are release files in correct location?
-[Y] Do release files have the word incubating in their name?
-[Y] Are the digital signature and hashes correct?
-[Y] Does DISCLAIMER file exist?
-[Y] Do LICENSE and NOTICE files exists?
-[Y] Is the LICENSE and NOTICE text correct?
-[N] Is the NOTICE year correct?
-[Y] Un-included software dependencies are not mentioned in LICENSE or NOTICE? 
(sz: did not finish checking)
-[Y] License information is not mentioned in NOTICE?
Is there any 3rd party code contained inside the release? If so:
-[Y] Does the software have a compatible license?
-[Y] Are all software licenses mentioned in LICENSE?
-[Y] Is the full text of the licenses (or pointers to it) in LICENSE?
Is any of this code Apache licensed? Do they have NOTICE files? If so:
-[Y] Have relevant parts of those NOTICE files been added to this NOTICE
file?
-[Y] Do all source files have ASF headers?
-[Y] Do the contents of the release match with what's tagged in version control?
-[N] Are there any unexpected binary files in the release?
-[Y] Can you compile from source? Are the instruction clear?

+1 with the caveat:
- NOTICE year was fixed on master but not on the release candidate. rc3 still 
reads "2017-2018"

-sz

On 2019/02/19 00:19:52, Roshani Nagmote  wrote: 
> +1 Downloaded, installed on Ubuntu 16.04. Verified signatures.
> Built from source with cuda enabled. Ran train_mnist.py test successfully.
> 
> Thanks,
> Roshani
> 
> On Sun, Feb 17, 2019 at 12:13 PM Carin Meier  wrote:
> 
> > +1 Downloaded and verified the signature on the tar. Built and tested the
> > Scala/Clojure package
> >
> > On Sun, Feb 17, 2019 at 2:13 PM Qing Lan  wrote:
> >
> > > +1 (binding) on the release. Checked Mac + Linux (Ubuntu 16.04) build
> > from
> > > source successfully. Checked Scala build with no errors.
> > >
> > > On 2/15/19, 6:08 PM, "Piyush Ghai"  wrote:
> > >
> > > Dear MXNet community,
> > >
> > > I would like to propose a vote to release Apache MXNet (incubating)
> > > version v1.4.0.
> > > Voting will start today, Friday February 15th 6pm PST and will close
> > > on Monday,
> > > February 18th 6pm PST.
> > >
> > > Link to release notes:
> > >
> > >
> > >
> > https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.0+Release+Notes
> > > <
> > >
> > https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+(incubating)+1.4.0+Release+Notes
> > > >
> > >
> > > Link to release candidate 1.4.0.rc3:
> > >  
> > > https://github.com/apache/incubator-mxnet/releases/tag/1.4.0.rc3 <
> > > https://github.com/apache/incubator-mxnet/releases/tag/1.4.0.rc3>/
> > >
> > > Link to source and signatures on apache dist server:
> > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.0.rc3/ <
> > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.0.rc3/>
> > >
> > >
> > > Please remember to TEST first before voting accordingly:
> > > +1 = approve
> > > +0 = no opinion
> > > -1 = disapprove (provide reason)
> > >
> > >
> > > Best regards,
> > > Piyush
> > >
> > >
> >
> 


[Announcement] New Committer - Kan Wu (@wkcn)

2019-02-18 Thread Sheng Zha
Hi,

Please join me in welcoming Kan Wu (@wkcn), as a new committer!

Kan has brought many valuable contributions to MXNet [1]. He also enriches
the MXNet ecosystem with his operator toolkit MobulaOP.

We are excited to have Kan join us as a committer.

-sz

[1]
https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+author%3Awkcn+
[2] https://github.com/wkcn/MobulaOP


Re: Rust Client Lib

2019-02-18 Thread Sheng Zha
Hi,

Thanks for sharing the results. A problem in the benchmark is that the 
comparison does not take into account that MXNet is making a copy while pytorch 
is not.

MXNet made the choice of not doing a zero-copy for numpy arrays, but instead 
making a copy of the numpy data. This means that users are free to change the 
numpy array after passing it into MXNet. On the other hand, PyTorch chose not 
to make a copy, by keeping the array alive through incrementing the reference 
count and then reusing the data pointer.
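
A tiny sketch to make the difference concrete (the shape here is arbitrary and
this is not the benchmark itself):

import numpy as np
import mxnet as mx
import torch

a = np.ones((1024, 1024), dtype=np.float32)
x = mx.nd.array(a)        # MXNet copies, so later changes to `a` don't affect `x`
t = torch.from_numpy(a)   # PyTorch shares memory with `a` (zero-copy)
a[0, 0] = 42.0
print(x.asnumpy()[0, 0], t[0, 0].item())   # 1.0 42.0
t16 = t.half()            # .half() allocates a new fp16 tensor, i.e. a copy

So timing `mx.nd.array(a)` against `torch.from_numpy(a)` compares a memcpy plus
bookkeeping against bookkeeping only.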

This also explains why pytorch fp16 is this much worse than fp32 in your 
results (`.half()` has to make a copy).

If you control for that factor, you will find MXNet to be 50%-100% faster on 
your workload. I shared the results in your gist comments [1]. Feel free to let 
me know if you have questions.

-sz

[1] 
https://gist.github.com/SunDoge/59a8ff336703b45be30b46dc3ee8b4ab#gistcomment-2841120

On 2019/02/19 02:33:20, epsund...@gmail.com  wrote: 
> I wrote some benchmark code, and here's the discussion:
> https://discuss.mxnet.io/t/hybrid-training-speed-is-20-slower-than-pytorch/2731/3
> 
> There's another discussion here:
> https://discuss.mxnet.io/t/performance-of-symbol-vs-ndarray-vs-pytorch/870/6
> 
> I slightly modify it:
> https://gist.github.com/SunDoge/59a8ff336703b45be30b46dc3ee8b4ab
> 
> 
> On 2019/02/18 19:26:27, Edison Gustavo Muenz  wrote: 
> > Hello!
> > 
> > > mxnet is somehow slower than pytorch, even with hybridize on, and that's
> > why I start writing binding for pytorch now.
> > 
> > I believe many people in this list will be very interested in why you say
> > this.
> > 
> > As far as I know, and correct me if I'm wrong, MXNet is supposed to be a
> > very fast, if not the fastest, dl framework. I mean in raw performance
> > numbers.
> > 
> > Would you mind expanding on what you mean? I'm genuinely interested.
> > 
> > Best,
> > Edison Gustavo Muenz
> > 
> > On Mon 18. Feb 2019 at 17:28, epsund...@gmail.com 
> > wrote:
> > 
> > > The rust crate for tensorflow supports only inference, which limits its
> > > usage. If you really want to deploy your network, TensorRT and TVM may be
> > > better choice.
> > >
> > > I really want to write a dl framework in rust from scratch. However,
> > > there's no mature GPU Tensor library in rust (rust-ndarray is a great 
> > > crate
> > > but it only support CPU. arrayfire may support ND array in the future,
> > > which is a good candidate). So I have to write bindings for existing
> > > project, which is much easier. .The benefit is that I can safely wrap 
> > > those
> > > unsafe C pointer, and with the help of generic, I can manipulate data with
> > > ndarray in a type-safe way.
> > >
> > > The only difficulty is that I'm a postgraduate and I'm pretty sure my boss
> > > won't be happy to see me writing rust code instead of doing research.
> > > Besides, mxnet is somehow slower than pytorch, even with hybridize on, and
> > > that's why I start writing binding for pytorch now.
> > >
> > > On 2019/02/09 01:35:04, Zach Boldyga  wrote:
> > > > I did some homework and stumbled across something that changed my view 
> > > > of
> > > > where machine learning libraries are headed:
> > > >
> > > >
> > > https://github.com/tensorflow/swift/blob/master/docs/WhySwiftForTensorFlow.md
> > > >
> > > > Google & Apple are building first-class support for Tensorflow right 
> > > > into
> > > > the Swift language. They chose Swift very carefully, and while they 
> > > > noted
> > > > Rust is a great choice for lots of reasons, the learning curve of the
> > > > language is too steep... It seems like Rust isn't going to get much love
> > > > from the ML community in the places that matter.
> > > >
> > > > I also see that as of writing this, the Rust crate for Tensorflow has
> > > only
> > > > ~10,000 lifetime downloads, which is pretty low considering how much
> > > effort
> > > > the client library required. So the existing set of practitioners in the
> > > > language is very small, and it's unlikely to grow.
> > > >
> > > > Also, the benefits of Rust memory safety and ownership won't really be
> > > > realized via a client library that uses FFI on a C API.
> > > >
> > > > I'm not going to move forward with this client lib. I'll check back here
> > > in
> > > > the future and see if there's any activity... In the meantime, if 
> > > > someone
> > > > stumbles across this in the future and wants to pick it up, don't let me
> > > > stand in the way!
> > > >
> > > > - Zach
> > > >
> > > >
> > > > On Wed, Jan 30, 2019 at 11:16 PM Zach Boldyga 
> > > wrote:
> > > >
> > > > > Rad, thanks for the input everyone!
> > > > >
> > > > > I'm anticipating some friction with using FFI with the C API since 
> > > > > it's
> > > > > considered unsafe in Rust; difficulty of integrating will depend on 
> > > > > the
> > > > > nuances of the C API as HY mentioned...
> > > > >
> > > > > Going to go ahead and dive in. Will be back eventually for feedback /
> > > > > input!
> > > 

Re: [RESTARTING][VOTE] Release Apache MXNet (incubating) version 1.4.0.rc2

2019-02-13 Thread Sheng Zha
Thanks for making me aware of the issue. I started the fix here [1].

And thanks to Qing Lan and Zach Kimberg for pinging me and helping with 
isolating the problem.

-sz

[1] https://github.com/apache/incubator-mxnet/pull/14148

On 2019/02/13 19:45:41, Aaron Markham  wrote: 
> Sheng, thanks for being so proactive, but adding license headers to
> the markdown files in #14142 breaks the website as I warned. I caught
> it before it went live.
> I've disabled website publishing until this situation is resolved.
> 
> 
> On Wed, Feb 13, 2019 at 10:59 AM Sheng Zha  wrote:
> >
> > Update: All license issues mentioned in the general vote from Luciano (pom
> > files, docker files, docs) have been fixed on master [1][2].
> >
> > Let me know if there's more to address.
> >
> > -sz
> >
> > [1] https://github.com/apache/incubator-mxnet/pull/14138
> > [2] https://github.com/apache/incubator-mxnet/pull/14142
> >
> > On Wed, Feb 13, 2019 at 7:54 AM Michael Wall  wrote:
> >
> > > So is the plan option 3?  I have seen tickets fixing licenses, so good 
> > > work
> > > there.  When a vote is started on dev@mxnet.a.o, include wording about not
> > > waiting the full 72 hours since this is just updating licensing.  Get as
> > > many +1 votes as you can on both the release and not waiting then move on
> > > to IPMC.  The vote on general@incubator.a.o should still stay open 72
> > > hours.  I will look at it as soon as it is posted, but maybe reach out to
> > > the other mentors directly asking for their help to review as soon as it 
> > > is
> > > out.  The goal is to have the 3 or more +1 votes and more positive then
> > > negative as soon as the 72 hours hits.
> > >
> > > Mike
> > >
> > > On Wed, Feb 13, 2019 at 2:44 AM Justin Mclean 
> > > wrote:
> > >
> > > > forgot to CC dev
> > > >
> > > > > Begin forwarded message:
> > > > >
> > > > > From: Justin Mclean 
> > > > > Subject: Re: [RESTARTING][VOTE] Release Apache MXNet (incubating)
> > > > version 1.4.0.rc2
> > > > > Date: 13 February 2019 at 6:43:48 pm AEDT
> > > > > To: Michael Wall 
> > > > >
> > > > > Hi,
> > > > >
> > > > >> Option 1:
> > > > >> Do nothing.  I don't know how a RESTARTED vote works.
> > > > >
> > > > > I don’t believe there is such a concept.
> > > > >
> > > > >> Option 2:
> > > > >> Start another vote thread on general@incubator.a.o pointing to the
> > > > original vote thread on dev@mxnet.a.o and the canceled vote thread.
> > > > >
> > > > > It may end up with the same outcome.
> > > > >
> > > > >> Option 3:
> > > > >> 1 - Fix the header issues.
> > > > > 
> > > > >> 3 - Start a vote thread on general@incubator.a.o pointing to the new
> > > > vote thread from step 2.  Will likely need to be open 72 hours.
> > > > >
> > > > > Just be aware it can take longer, sometime much longer, to get the 3 
> > > > > +1
> > > > IPMC votes.
> > > > >
> > > > >> Tough position to be in with Horovod being released.
> > > > >
> > > > > Which show the risk of tying in your release cycle with a non Apache
> > > > product. IMO you need to be independent of 3rd party releases and not
> > > tied
> > > > to their milestones. If they wanted to include a particular unreleased
> > > > version of ASF software, you should started the release a long time 
> > > > ahead
> > > > of time just in case problems were encountered issues.This probably
> > > > wouldn't be an issue if you made more frequent releases, it’s easier to
> > > > check compliance with frequent releases so the 3rd party could just take
> > > > the last good release and go with that.
> > > > >
> > > > > Thanks,
> > > > > Justin
> > > >
> > > >
> > >
> 


Re: [RESTARTING][VOTE] Release Apache MXNet (incubating) version 1.4.0.rc2

2019-02-13 Thread Sheng Zha
Update: All license issues mentioned in the general vote from Luciano (pom
files, docker files, docs) have been fixed on master [1][2].

Let me know if there's more to address.

-sz

[1] https://github.com/apache/incubator-mxnet/pull/14138
[2] https://github.com/apache/incubator-mxnet/pull/14142

On Wed, Feb 13, 2019 at 7:54 AM Michael Wall  wrote:

> So is the plan option 3?  I have seen tickets fixing licenses, so good work
> there.  When a vote is started on dev@mxnet.a.o, include wording about not
> waiting the full 72 hours since this is just updating licensing.  Get as
> many +1 votes as you can on both the release and not waiting then move on
> to IPMC.  The vote on general@incubator.a.o should still stay open 72
> hours.  I will look at it as soon as it is posted, but maybe reach out to
> the other mentors directly asking for their help to review as soon as it is
> out.  The goal is to have the 3 or more +1 votes and more positive then
> negative as soon as the 72 hours hits.
>
> Mike
>
> On Wed, Feb 13, 2019 at 2:44 AM Justin Mclean 
> wrote:
>
> > forgot to CC dev
> >
> > > Begin forwarded message:
> > >
> > > From: Justin Mclean 
> > > Subject: Re: [RESTARTING][VOTE] Release Apache MXNet (incubating)
> > version 1.4.0.rc2
> > > Date: 13 February 2019 at 6:43:48 pm AEDT
> > > To: Michael Wall 
> > >
> > > Hi,
> > >
> > >> Option 1:
> > >> Do nothing.  I don't know how a RESTARTED vote works.
> > >
> > > I don’t believe there is such a concept.
> > >
> > >> Option 2:
> > >> Start another vote thread on general@incubator.a.o pointing to the
> > original vote thread on dev@mxnet.a.o and the canceled vote thread.
> > >
> > > It may end up with the same outcome.
> > >
> > >> Option 3:
> > >> 1 - Fix the header issues.
> > > 
> > >> 3 - Start a vote thread on general@incubator.a.o pointing to the new
> > vote thread from step 2.  Will likely need to be open 72 hours.
> > >
> > > Just be aware it can take longer, sometime much longer, to get the 3 +1
> > IPMC votes.
> > >
> > >> Tough position to be in with Horovod being released.
> > >
> > > Which show the risk of tying in your release cycle with a non Apache
> > product. IMO you need to be independent of 3rd party releases and not
> tied
> > to their milestones. If they wanted to include a particular unreleased
> > version of ASF software, you should started the release a long time ahead
> > of time just in case problems were encountered issues.This probably
> > wouldn't be an issue if you made more frequent releases, it’s easier to
> > check compliance with frequent releases so the 3rd party could just take
> > the last good release and go with that.
> > >
> > > Thanks,
> > > Justin
> >
> >
>


Re: [RESTARTING][VOTE] Release Apache MXNet (incubating) version 1.4.0.rc2

2019-02-12 Thread Sheng Zha
Thanks for the detailed explanation and the help on educating the community, 
Michael.

People on the general list are spending time to help us get the licensing 
right. If possible, I think we should show that we are thankful by treating their feedback 
more seriously, making the effort to quickly fix the problems, and getting our 
release out when ready. Fixes for the issues found during the release are 
already going in as we speak [1][2][3].

One thing the community would benefit from is clarity on which file types 
we should remove from the rat-excludes file that we have [4], so that we can make 
the project compliant with the release policy once and for all.

-sz

[1] https://github.com/apache/incubator-mxnet/pull/14138
[2] https://github.com/apache/incubator-mxnet/pull/14141
[3] https://github.com/apache/incubator-mxnet/pull/14043
[4] 
https://github.com/apache/incubator-mxnet/blob/master/tests/nightly/apache_rat_license_check/rat-excludes

On 2019/02/13 01:14:07, Michael Wall  wrote: 
> Hi Qing,
> 
> I see 3 options
> 
> Option 1:
> Do nothing.  I don't know how a RESTARTED vote works.  Steffen counted the
> binding votes from the before it was restarted.  Unsure if that actually
> works.  There has been one +1 votes since the restart, but it is
> non-binding as best I can tell even though it labeled as binding.  To be a
> binding vote for the general@incubator.a.o VOTE you must be on the
> Incubator PMC or IPMC.  Users on the MXNet Podling PMC or PPMC have a
> binding vote only on the dev@mxnet VOTE thread.   See
> https://incubator.apache.org/policy/incubation.html#releases.  In addition,
> those binding +1 votes may need to be changes based on
> http://www.apache.org/legal/release-policy.html#release-approval which reads
> 
> "Before casting +1 binding votes, individuals are REQUIRED to download all
> signed source code packages onto their own hardware, verify that they meet
> all requirements of ASF policy on releases as described below, validate all
> cryptographic signatures, compile as provided, and test the result on their
> own platform."
> 
> Luciano's -1 was because the release does not meet the licensing policy at
> http://www.apache.org/legal/release-policy.html#license-headers
> 
> For this reason, I can not give a +1 on the general@incubator.a.o VOTE
> thread.  Sorry, that is why I have not voted.
> 
> Option 2:
> Start another vote thread on general@incubator.a.o pointing to the original
> vote thread on dev@mxnet.a.o and the canceled vote thread.  Likely that
> need to be open for 72 hours unless the IPMC agrees otherwise.  I list this
> because I don't know if a RESTART recounting votes from a prior thread is
> valid.  But this option has the same risk of not being approved for the
> reasons listed above.
> 
> Option 3:
> 1 - Fix the header issues.  I dug a little more, and the excludes file at
> https://github.com/apache/incubator-mxnet/blob/v1.4.x/tests/nightly/apache_rat_license_check/rat-excludes
> is
> overly broad and excludes files from the check that should have license
> headers, again per
> http://www.apache.org/legal/release-policy.html#license-headers
> 2 - Start a vote thread on dev@mxnet.a.o.  Doesn't have to be open 72 hours
> according to Justin's note if the PPMC agrees.  Expect this would need to
> be documented on the mailing list, but could be part of the vote I think.
> 3 - Start a vote thread on general@incubator.a.o pointing to the new vote
> thread from step 2.  Will likely need to be open 72 hours.
> 
> Clearly option 1 would be faster, but the risk is the vote not passing.
> Option 2 may not be needed if the restart in option 1 is valid.  Option 3
> is the most correct I think according to what I read in ASF policy.  But
> rushing a vote does have risks, such as less testing on the code being
> released.
> 
> To make this more confusing, the VOTE thread is showing up on both
> dev@mxnet.a.o and general@incubator.a.o.  There is an additional +1 vote on
> the dev@mxnet.a.o list that doesn't show up on the general@incubator, but
> this too is non-binding as best I can tell.
> 
> Tough position to be in with Horovod being released.  Nothing in ASF policy
> makes allowances for such an event that I could find.  Perhaps we should
> ask for more clarification on general@incubator.a.o to get more thoughts
> from the IPMC.
> 
> Mike
> 
> On Tue, Feb 12, 2019 at 5:53 PM Qing Lan  wrote:
> 
> > Hi Michael,
> >
> > Could you please guide how to proceed with this? Given that we have a
> > possibility of announcing MXNet support in Horovod with their next release
> > and this would help MXNet increase our visibility.
> >
> > Thanks,
> > Qing
> >
> > On 2/12/19, 2:16 PM, "Michael Wall"  wrote:
> >
> > Team,
> >
> > Here is my read on the situation.  The vote has been canceled.
> > Justin's
> > point was that a -1 doesn't mean you must cancel a vote for the
> > reasons he
> > outlined.  But here the vote needs to be restarted and the issue
> > Luciano
> > found needs 

Re: libjpegturbo

2019-02-12 Thread Sheng Zha
MXNet pip statically links with libturbojpeg that's built from source, not from 
debian package. The script for linux and mac can be found here: 
https://github.com/apache/incubator-mxnet/blob/master/tools/dependencies/libturbojpeg.sh#L22
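
If you want to try the flag in a local make build, a rough sketch (assuming the
USE_LIBJPEG_TURBO knobs in make/config.mk and a libjpeg-turbo install under
/usr/local; adjust the path to wherever the library actually lives):

make -j$(nproc) USE_LIBJPEG_TURBO=1 USE_LIBJPEG_TURBO_PATH=/usr/local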

-sz

On 2019/02/12 07:46:30, Per da Silva  wrote: 
> Hello everyone,
> 
> I was wondering if there was any particular reason why we are building and
> testing mxnet with USE_LIBJPEG_TURBO=0. I noticed that we are shipping it
> with USE_LIBJPEG_TURBO=1 (eg. make/pip/pip_linux_cpu.mk).
> 
> I ran into issues trying to compile mxnet with the libjpegturbo flag on
> Ubuntu 16.04 (I was wondering if this was the reason). This came from an
> issue with libturbojpeg-dev package. There is a fix described on [1]. I've
> applied it in a PR, which I'm currently testing [2].
> 
> Cheers,
> 
> Per
> 
> [1] https://github.com/HaxeFoundation/hashlink/issues/147
> [2] https://github.com/apache/incubator-mxnet/pull/14127
> 


Re: RE: Third-party package tests for MXNet nightly builds

2019-02-11 Thread Sheng Zha
Thanks for the proposal, Felix. On one hand, I agree that richer workload from 
the ecosystem helps find issues in MXNet early. On the other hand, I'm 
concerned about tightly coupling the development of projects.

Monitoring the upstream library and addressing problems for upgrading 
dependency should be the concern of the downstream projects. These projects own 
the effort of having proper testing for any changes needed, including version 
upgrades. Having these projects in MXNet CI means the responsibility of 
maintaining these projects partly transfers to MXNet's contributors, which 
doesn't seem right. It blurs the line of who's responsible for debugging, 
isolating the problem, making minimum reproducible sample code, and posting the 
fix.

That said, I think there's much opportunity for reusing the current code for 
MXNet CI. Projects in MXNet's ecosystem would likely benefit from MXNet's CI 
solution so that each individual community project can identify issues early. 
(And from offline chats with Chance and his team members, I think this is 
what's already on their minds.)

-sz

On 2019/02/11 16:46:06, "Zhao, Patric"  wrote: 
> Agree to track the 3rd party packages which make MXNet more prosperous :)
> 
> Before building the CI, I suggest to create the related labels, like sockeye, 
> gluonCV, gluonNLP, etc, in the GitHub and give the high priority for these 
> issues/PR.
> So the issue/PR can be fixed quickly and  these important applications would 
> not be blocked again.
> 
> We can help for the performance/backend/operator related issues as well :)
> 
> Thanks,
> 
> --Patric 
> 
> 
> 
> > -Original Message-
> > From: Chance Bair [mailto:chanceb...@gmail.com]
> > Sent: Monday, February 11, 2019 11:28 PM
> > To: dev@mxnet.incubator.apache.org
> > Cc: d...@mxnet.apache.org
> > Subject: Re: Third-party package tests for MXNet nightly builds
> > 
> > Hi Felix,
> > 
> > Thank you for the request!  The CI team is currently working on improving
> > our benchmarking platform and will evaluate this request carefully.
> > 
> > Chance Bair
> > 
> > 
> > 
> > On Mon, Feb 11, 2019 at 3:59 PM Carin Meier 
> > wrote:
> > 
> > > Can't speak for the CI team, but in general I think that it is good idea.
> > >
> > > On a separate note, I've been playing around with Sockeye recently and
> > > it's great! Awesome work and glad to see MXNet used for such cutting
> > > edge use cases.
> > > I'd love to see closer collaboration with the Sockeye team and MXNet
> > > for innovation, cross pollination, and evangelization of what MXNet can
> > do .
> > >
> > > Best,
> > > Carin
> > >
> > > On Mon, Feb 11, 2019 at 6:01 AM Felix Hieber 
> > > wrote:
> > >
> > > > Hello dev@,
> > > >
> > > >
> > > >
> > > > I would like to ask around whether there is interest in the
> > > > community to test nightly builds of MXNet with third-party packages
> > > > that depend on
> > > MXNet
> > > > and act as early adopters. The goal is to catch regressions in MXNet
> > > early,
> > > > allowing time for bug fixes before a new release is cut.
> > > >
> > > >
> > > >
> > > > For example, Sockeye  is a
> > > > customer
> > > of
> > > > new MXNet releases and aims to upgrade to latest MXNet as soon as
> > > possible.
> > > > Typically, we update our dependency on MXNet once a new release
> > > > becomes available (through pip). However, there have been cases
> > > > where new
> > > releases
> > > > of MXNet introduced regressions undetected by MXNet tests (hence
> > > > passing the release process): the latest example is this issue
> > > > , which may
> > > > have been introduced already back in October, but, due to infrequent
> > > > MXNet releases, has only surfaced recently and will most likely
> > > > force us to
> > > wait
> > > > for a post or 1.4.1 release. In this particular example, Sockeye’s
> > > > tests would have detected this, and the issue could have been
> > > > created already
> > > in
> > > > October, potentially avoiding its presence in the 1.4.0 release.
> > > >
> > > >
> > > >
> > > > More generally, I think there are several third-party packages with
> > > > valuable test suites (e.g. gluon-nlp) that can contribute to
> > > > catching
> > > MXNet
> > > > regressions or incompatibilities early. Running these test suites
> > > > for
> > > each
> > > > and every PR or commit on the MXNet main repo would be too much
> > overhead.
> > > > My proposal would be to trigger these tests with the nightly builds
> > > > (pip
> > > > releases) of MXNet in a separate CI pipeline that is able to notify
> > > > the
> > > 3p
> > > > maintainers in a case of failure, but does not block MXNet
> > > > development
> > > (or
> > > > nightly build releases) in any way.
> > > >
> > > > Roughly it would do the following:
> > > >
> > > >- pip install mxnet--
> > > >- for each 3p package that is part of the pipeline:
> > > >   - 

Re: [RESULTS][VOTE] Release Apache MXNet (incubating) version 1.4.0.rc2

2019-02-11 Thread Sheng Zha
Update on the issue 1. and 4.:
For 1., I fixed the notice year in master branch [1]. If we are to create a new 
rc, the fix should be cherry-picked.
For 4., MKLDNN has found the issue [2] and posted the fix in their master 
branch. I'm requesting that the fix be backported for the minor version 0.17 
that mxnet 1.4 is using.

-sz

[1] https://github.com/apache/incubator-mxnet/pull/14043
[2] https://github.com/intel/mkl-dnn/issues/405#issuecomment-462400456

On 2019/02/05 04:41:32, Steffen Rochel  wrote: 
> Dear MXNet community -
> the result of the vote to release Apache MXNet (incubating) version
> 1.4.0.rc2 are as follows:
> Binding:
> +1  three (Carin, Indhu, Haibin)
> +0  one (Sheng)
> -0   one (Anirudh)
> -1   none
> 
> Non-binding:
> +1  six   (Yuxi, Aston, Kellen, Aaron, Tao, Lin)
> 0 none
> -1 none
> 
> Voting thread:
> 
> https://lists.apache.org/thread.html/5d4aa084e51e9be919d62bfd0e6d625f37318624124a033a5c48507c@%3Cdev.mxnet.apache.org%3E
> 
> 
> The following issues have been raised with v1.4.0.rc2:
> 1. NOTICE year is wrong (2018): Not considered a stopping issue as release
> was started in 2018.
> 2. TVM NOTICE missing - TVM NOTICE file was added post the commit ID used
> in MXNet v1.4.0.rc2 release, not considered a stopping issue
> 3. build with make passes, but build with cmake failed in
> 3rdparty/dmlc-core/test/unittest
> 4. Recent MKLDNN upgrade prevents us from offering binary distribution for
> earlier versions before OSX 10.13.
> 
> The vote results meet the release voting criteria as defined at
> https://www.apache.org/foundation/voting.html#ReleaseVotes: 3 +1 binding
> votes, no -1, more positive then negative votes.
> I'm not sure there is a difference between -0 and +0 votes, but even if
> there is a difference there are more positive vs. negative votes.
> 
> I do not consider the issues raised to be show stoppers for moving forward with
> the release. I do suggest getting these issues addressed in the next release
> or with a patch on version 1.4.0.
> To give everybody a chance to weigh in on my decision as release manager, I
> will wait until Wednesday 9am PST (about 36h from now) before starting the vote
> on the general list.
> Please speak up asap if you think the release cannot move forward as is and
> provide justification.
> 
> Regards,
> Steffen
> 


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc2

2019-02-04 Thread Sheng Zha
; > Yuxi)
> > > > who
> > > > > tested and provided feedback - we have five +1 votes.
> > > > > As of today, Friday Feb 1st 2019 6pm PST we have two binding votes,
> > one
> > > > +1
> > > > > (Carin), one +0 (Sheng). The vote continues be open waiting for
> > > feedback
> > > > > from PMC members.
> > > > > Hope you can spare some time over the weekend to provide feedback.
> > > > >
> > > > > Regards,
> > > > > Steffen
> > > > >
> > > > > On Fri, Feb 1, 2019 at 12:44 AM Marco de Abreu <
> > > marco.g.ab...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Considering the release process has been started last year and
> the
> > > code
> > > > > tag
> > > > > > has also been based on last year, I'd say that it is not really a
> > big
> > > > > deal.
> > > > > >
> > > > > > -Marco
> > > > > >
> > > > > > Am Fr., 1. Feb. 2019, 09:33 hat Sheng Zha 
> > > > > > geschrieben:
> > > > > >
> > > > > > > I found an awesome checklist for incubator releases [1] so I'm
> > > using
> > > > it
> > > > > > > here:
> > > > > > >
> > > > > > > -[Y] Are release files in correct location?
> > > > > > > -[Y] Do release files have the word incubating in their name?
> > > > > > > -[Y] Are the digital signature and hashes correct?
> > > > > > > -[Y] Does DISCLAIMER file exist?
> > > > > > > -[Y] Do LICENSE and NOTICE files exists?
> > > > > > > -[N/A] Is the LICENSE and NOTICE text correct? (sz: did not
> > finish
> > > > > > > checking)
> > > > > > > -[N] Is the NOTICE year correct?
> > > > > > > -[N/A] Un-included software dependencies are not mentioned in
> > > LICENSE
> > > > > or
> > > > > > > NOTICE? (sz: did not finish checking)
> > > > > > > -[Y] License information is not mentioned in NOTICE?
> > > > > > > Is there any 3rd party code contained inside the release? If
> so:
> > > > > > > -[Y] Does the software have a compatible license?
> > > > > > > -[Y] Are all software licenses mentioned in LICENSE?
> > > > > > > -[Y] Is the full text of the licenses (or pointers to it) in
> > > LICENSE?
> > > > > > > Is any of this code Apache licensed? Do they have NOTICE files?
> > If
> > > > so:
> > > > > > > -[N] Have relevant parts of those NOTICE files been added to
> this
> > > > > NOTICE
> > > > > > > file?
> > > > > > > TVM has Apache 2.0 license and its NOTICE hasn't been added to
> > > > MXNet's
> > > > > > > NOTICE file.
> > > > > > > -[Y] Do all source files have ASF headers? (sz: enforced by
> > license
> > > > > > > checker)
> > > > > > > -[Y] Do the contents of the release match with what's tagged in
> > > > version
> > > > > > > control?
> > > > > > > -[N] Are there any unexpected binary files in the release?
> > > > > > > -[Y] Can you compile from source? Are the instruction clear?
> > > > > > >
> > > > > > > Is the issue minor?
> > > > > > > - Unsure. NOTICE year is wrong (it's 2019 now). TVM's NOTICE is
> > > > missing
> > > > > > > from MXNet's NOTICE file.
> > > > > > > Could it possibly be fixed in the next release?
> > > > > > > - Yes
> > > > > > > I vote with:
> > > > > > > +0 not sure if it should be released. Could mentors advise if
> we
> > > > should
> > > > > > fix
> > > > > > > them before release?
> > > > > > >
> > > > > > > [1]
> https://wiki.apache.org/incubator/IncubatorReleaseChecklist
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Jan 31, 2019 at 10:56 PM Lv, Tao A  >
> > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > 

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc2

2019-02-01 Thread Sheng Zha
I found an awesome checklist for incubator releases [1] so I'm using it
here:

-[Y] Are release files in correct location?
-[Y] Do release files have the word incubating in their name?
-[Y] Are the digital signature and hashes correct?
-[Y] Does DISCLAIMER file exist?
-[Y] Do LICENSE and NOTICE files exists?
-[N/A] Is the LICENSE and NOTICE text correct? (sz: did not finish checking)
-[N] Is the NOTICE year correct?
-[N/A] Un-included software dependencies are not mentioned in LICENSE or
NOTICE? (sz: did not finish checking)
-[Y] License information is not mentioned in NOTICE?
Is there any 3rd party code contained inside the release? If so:
-[Y] Does the software have a compatible license?
-[Y] Are all software licenses mentioned in LICENSE?
-[Y] Is the full text of the licenses (or pointers to it) in LICENSE?
Is any of this code Apache licensed? Do they have NOTICE files? If so:
-[N] Have relevant parts of those NOTICE files been added to this NOTICE
file?
TVM has Apache 2.0 license and its NOTICE hasn't been added to MXNet's
NOTICE file.
-[Y] Do all source files have ASF headers? (sz: enforced by license checker)
-[Y] Do the contents of the release match with what's tagged in version
control?
-[N] Are there any unexpected binary files in the release?
-[Y] Can you compile from source? Are the instruction clear?

Is the issue minor?
- Unsure. NOTICE year is wrong (it's 2019 now). TVM's NOTICE is missing
from MXNet's NOTICE file.
Could it possibly be fixed in the next release?
- Yes
I vote with:
+0 not sure if it should be released. Could mentors advise if we should fix
them before release?

[1] https://wiki.apache.org/incubator/IncubatorReleaseChecklist
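
A quick scripted check for the sha512 item above (the .asc still needs
gpg --verify, as shown later in this thread). A minimal sketch, assuming the
1.4.0.rc2 artifact names from this vote and that the .sha512 file starts with
the bare hex digest (adjust the parsing if it uses the gpg --print-md layout):

import hashlib

def sha512_of(path, chunk=1 << 20):
    h = hashlib.sha512()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk), b''):
            h.update(block)
    return h.hexdigest()

tarball = 'apache-mxnet-src-1.4.0.rc2-incubating.tar.gz'
with open(tarball + '.sha512') as f:
    expected = f.read().split()[0]
print('sha512 OK' if sha512_of(tarball) == expected else 'sha512 MISMATCH')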


On Thu, Jan 31, 2019 at 10:56 PM Lv, Tao A  wrote:

>
> +1. Verified below items:
>
> 1. Checkout code from tag 1.4.0rc2 and build mkldnn backend successfully
> on both cpu and gpu w/ mkl and openblas
> 2. ResNet50v1 FP32 performance looks good for both latency and throughput
> 3. Quantization script works well with ResNet50v1
> 4. ResNet50v1 INT8 model accuracy looks good
> 5. ResNet50v1 INT8 model performance speedup looks good for both latency
> and throughput
>
>
> -Original Message-
> From: kellen sunderland [mailto:kellen.sunderl...@gmail.com]
> Sent: Friday, February 1, 2019 11:45 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc2
>
> Great, thanks Steffen!  I added a few key files but missed that one.
>
> +1 from me.
>
> On Thu, Jan 31, 2019 at 9:35 AM Steffen Rochel 
> wrote:
>
> > Kellen - Sergey, the 1.4.0 release co-manager signed the tar file.
> > Please use his public key to validate the asc.
> > I was able to validate:
> >
> > curl https://dist.apache.org/repos/dist/dev/incubator/mxnet/KEYS -o
> > KEYS
> >
> > gpg --import KEYS
> >
> > gpg --verify apache-mxnet-src-1.4.0.rc2-incubating.tar.gz.asc
> >
> >
> > output:
> >
> > gpg: assuming signed data in
> 'apache-mxnet-src-1.4.0.rc2-incubating.tar.gz'
> >
> > gpg: Signature made Sat Jan 26 16:25:41 2019 PST
> >
> > gpg:using RSA key
> BD52136E76B7BD68E7843B0B591C06669F740FD7
> >
> > gpg: Good signature from "Sergey Kolychev "
> > [unknown]
> >
> > gpg: WARNING: This key is not certified with a trusted signature!
> >
> > gpg:  There is no indication that the signature belongs to the
> > owner.
> >
> > Primary key fingerprint: BD52 136E 76B7 BD68 E784  3B0B 591C 0666 9F74
> > 0FD7
> >
> >
> > Best,
> > Steffen
> >
> > On Wed, Jan 30, 2019 at 10:39 PM kellen sunderland <
> > kellen.sunderl...@gmail.com> wrote:
> >
> > > +0
> > >
> > > Overall release looks good.  Probably something I'm doing wrong, but
> > > so
> > far
> > > not able to validate the .asc.  I'm getting "Can't check signature:
> > > No public key".  I've added the keys from GitHub and the release
> > > folder, and also added your public key "40C9346904DFCE37" from the
> > > MIT key server Steffen.  Is there another key I'm missing?
> > >
> > > 1. sha512 look good.
> > > 2. Compile from source successfully
> > > 3. TensorRT build succeeds and runs inference for demo models 4.
> > > License, notice and disclaimer exist.
> > >
> > > -Kellen
> > >
> > > On Wed, Jan 30, 2019 at 8:58 PM Steffen Rochel
> > > 
> > > wrote:
> > >
> > > > Dear MXNet community -
> > > > we currently have three +1 votes, one binding.
> > > > As the vote did not reach the necessary number of binding votes
> > > > I'm extending voting.
> > > >
> > > > I'm calling on all PMC member, please test and vote.
> > > >
> > > > Regards,
> > > > Steffen
> > > >
> > > > On Wed, Jan 30, 2019 at 6:43 PM Aston Zhang
> > > > 
> > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > Tested with the Dive into Deep Learning book.
> > > > >
> > > > > On Wed, Jan 30, 2019 at 1:25 PM Steffen Rochel <
> > > steffenroc...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Thanks Carin and Yuxi.
> > > > > >
> > > > > > Committers and PMC members - please test and send your vote to
> > > 

Re: [Announce] Runtime feature detection

2019-01-25 Thread Sheng Zha
Hi Pedro,

Happy to help, though I was waiting for PR comments to be addressed. Currently 
the PR is close to complete, with some open comments to be resolved.

-sz

> On Jan 25, 2019, at 9:27 AM, Pedro Larroy  
> wrote:
> 
> That's Great! There's a PR that we should merge first which
> internalizes the enum inside the library as per Sheng's suggestion.
> 
> https://github.com/apache/incubator-mxnet/pull/13964
> 
> @Sheng could we merge the PR? so we can build on top of this feature?
> It's badly needed for tests suites etc.
> Thanks a lot!
> 
> Pedro.
> 
> 
>> On Fri, Jan 25, 2019 at 2:22 PM Iblis Lin  wrote:
>> 
>> Hi,
>> 
>> I added the Julia binding for it.
>> PR is here:
>> https://github.com/apache/incubator-mxnet/pull/13992
>> 
>> Iblis Lin
>> 林峻頤
>> 
>>> On 1/23/19 12:39 AM, Pedro Larroy wrote:
>>> Hi
>>> 
>>> I'm pleased to announce that runtime feature detection has been merged
>>> in master, thanks to Aaron for the merge and the many reviewers who
>>> gave feedback on the PR.  (
>>> https://github.com/apache/incubator-mxnet/pull/13549 )
>>> 
>>> As the functionality matures and is exposed through other bindings,
>>> please feel free to try and use it to build on it, for example for
>>> easier test suite selection depending on what's compiled in the
>>> engine.
>>> 
>>> Usage examples:
>>> 
>>> $ ipython
>>> In [4]: import mxnet.mxfeatures
>>> 
>>> In [5]: mxnet.mxfeatures.features_enabled()
>>> Out[5]:
>>> [Feature.CPU_SSE,
>>> Feature.CPU_SSE2,
>>> Feature.CPU_SSE3,
>>> Feature.CPU_SSE4_1,
>>> Feature.CPU_SSE4_2,
>>> Feature.CPU_AVX,
>>> Feature.F16C,
>>> Feature.BLAS_OPEN,
>>> Feature.LAPACK,
>>> Feature.SIGNAL_HANDLER,
>>> Feature.DEBUG]
>>> 
>>> In [6]: mxnet.mxfeatures.features_enabled_str()
>>> Out[6]: 'CPU_SSE, CPU_SSE2, CPU_SSE3, CPU_SSE4_1, CPU_SSE4_2, CPU_AVX,
>>> F16C, BLAS_OPEN, LAPACK, SIGNAL_HANDLER, DEBUG'
>>> 
>>> see also: help(mxnet.mxfeatures)
>>> 
>>> Regards.
>>> 
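
A minimal sketch of the test-suite-selection use case mentioned above, built
on the mxnet.mxfeatures module exactly as shown in the quoted output (the
module and function names are taken from that output and may move as the API
matures):

import unittest
import mxnet.mxfeatures as mxfeatures

def requires_feature(name):
    # skip the decorated test unless the engine was compiled with `name`
    enabled = mxfeatures.features_enabled_str().split(', ')
    return unittest.skipUnless(name in enabled,
                               'requires %s to be compiled in' % name)

class FeatureGatedTests(unittest.TestCase):
    @requires_feature('CUDA')
    def test_gpu_only_path(self):
        self.assertTrue(True)  # placeholder for a CUDA-dependent test

if __name__ == '__main__':
    unittest.main()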


Re: Taxonomy on our cwiki

2019-01-22 Thread Sheng Zha
Thanks, Qing. The plan is in the email. I thought about suggesting a wiki
guideline, but on second thought, once a good structure is in place, things
should self-organize within that structure, and I don't want to intimidate
people by forcing them to read :)

After experimenting, I found that using the move tool in cwiki doesn't seem to
affect edit history or authorship, so I see no reason to bother others.

Given the positive feedback in the last couple of days, I will go ahead and
move things around and report back the diff once finished (and I won't
remove or modify content).

-sz

On Tue, Jan 22, 2019 at 10:09 AM Qing Lan  wrote:

> Agreed +1.
> Could we draft a plan on CWIKI and let's sign up our name to migrate the
> pages to the right location?
>
> Thanks,
> Qing
>
> On 1/21/19, 6:18 AM, "Anton Chernov"  wrote:
>
> A quick tip about links to the wiki pages, note the difference in
> links:
>
> * https://cwiki.apache.org/confluence/display/MXNET/Release+Process
> (1)
> * https://cwiki.apache.org/confluence/x/BINjB (2)
>
> If sharing was done via the 'Share' menu the link (2) would persist
> after
> any structual movements.
>
> Best
> Anton
>
>
> сб, 19 янв. 2019 г. в 16:49, Pedro Larroy <
> pedro.larroy.li...@gmail.com>:
>
> > +1
> >
> > On Sat, Jan 19, 2019 at 2:51 PM Zhao, Patric 
> > wrote:
> > >
> > > +1, Good idea.
> > >
> > > It's not very easy to find out the related contents since lots of
> > folders in the website.
> > >
> > >
> > > > -Original Message-
> > > > From: Sheng Zha [mailto:zhash...@apache.org]
> > > > Sent: Saturday, January 19, 2019 3:28 AM
> > > > To: dev@mxnet.incubator.apache.org
> > > > Subject: Taxonomy on our cwiki
> > > >
> > > > Hi MXNet,
> > > >
> > > > Given that currently cwiki is the only place other than mxnet
> website
> > for
> > > > mxnet-related documentation, I'd like to request your attention
> to the
> > > > (slightly disorganized) cwiki page of MXNet. The top level
> folders
> > (and their
> > > > contents) currently looks like this:
> > > > - Design Proposals* (bag of proposals, not in order)
> > > > - Development* (mixture of guides, roadmaps, processes)
> > > > - Release Process (release notes)
> > > > - Website (guides and proposals)
> > > > - MXNet Clojure (call for contribution, guides)
> > > > - MXNet Keras Integration (design)
> > > > - MXNet-ONNX Integration (design, dev status)
> > > > - MXNet R Package (guide, backlog)
> > > > - MXNet-Scala (design, dev status, guide)
> > > > - Content Formatting Templates (not a folder but link to two
> docs)
> > > > - How-to articles (1 guide)
> > > > - Community (guide on apache-related processes)
> > > > - Data IO (designs)
> > > > - Continuous Integration (guides, designs)
> > > > - Meetups and Hangouts (events)
> > > >
> > > > And here are two good examples from successful Apache projects:
> > > > - Apache Flink: an **audience-oriented** structure [1]
> > > >   Users (Presentations and How-to)
> > > >   Contributors (Dev processes and How-to)
> > > >   Committers (Infra, Dev processes, Release processes, Releases)
> > > >   Roadmaps and Feature Designs (archive)
> > > > - Apache OpenNLP: a **content-oriented** structure [2]
> > > >   Guides
> > > >   External Resources
> > > >   Proposals
> > > >   Releasing
> > > >
> > > > Clean organization helps content discovery and saves time on
> locating
> > useful
> > > > content. Given that we have good amount of content on the wiki
> page, I
> > > > suggest that we decide on a cleaner taxonomy, re-organize
> contents
> > > > accordingly, and add future contents accordingly. To provide a
> > starting point
> > > > for the discussion, I suggest:
> > > > - Given the state we are in, start with content-oriented
> organization,
> > use
> > > > these top-level categories: Guides (including processes and
> how-tos),
> > > > Development (including designs, proposals, notes, roadmaps),
> Community
> > > > (including events, activities, external resources and contents)
> > > > - If people strongly prefer audience-oriented structure, later
> we can
> > adopt a
> > > > structure similar to Flink's.
> > > >
> > > > Feel free to share your thoughts and preferences here. Thanks.
> > > >
> > > > -sz
> > > >
> > > > [1]
> > > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Homehttp
> > > > s://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home
> > > > [2] https://cwiki.apache.org/confluence/display/OPENNLP/Index
> >
>
>
>


Taxonomy on our cwiki

2019-01-18 Thread Sheng Zha
Hi MXNet,

Given that currently cwiki is the only place other than mxnet website for
mxnet-related documentation, I'd like to request your attention to the
(slightly disorganized) cwiki page of MXNet. The top level folders (and
their contents) currently looks like this:
- Design Proposals* (bag of proposals, not in order)
- Development* (mixture of guides, roadmaps, processes)
- Release Process (release notes)
- Website (guides and proposals)
- MXNet Clojure (call for contribution, guides)
- MXNet Keras Integration (design)
- MXNet-ONNX Integration (design, dev status)
- MXNet R Package (guide, backlog)
- MXNet-Scala (design, dev status, guide)
- Content Formatting Templates (not a folder but link to two docs)
- How-to articles (1 guide)
- Community (guide on apache-related processes)
- Data IO (designs)
- Continuous Integration (guides, designs)
- Meetups and Hangouts (events)

And here are two good examples from successful Apache projects:
- Apache Flink: an **audience-oriented** structure [1]
  Users (Presentations and How-to)
  Contributors (Dev processes and How-to)
  Committers (Infra, Dev processes, Release processes, Releases)
  Roadmaps and Feature Designs (archive)
- Apache OpenNLP: a **content-oriented** structure [2]
  Guides
  External Resources
  Proposals
  Releasing

Clean organization helps content discovery and saves time on locating
useful content. Given that we have a good amount of content on the wiki page,
I suggest that we decide on a cleaner taxonomy, re-organize existing content
to fit it, and add future content accordingly. To provide a starting
point for the discussion, I suggest:
- Given the state we are in, start with content-oriented organization, use
these top-level categories: Guides (including processes and how-tos),
Development (including designs, proposals, notes, roadmaps), Community
(including events, activities, external resources and contents)
- If people strongly prefer audience-oriented structure, later we can adopt
a structure similar to Flink's.

Feel free to share your thoughts and preferences here. Thanks.

-sz

[1]
https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home
[2] https://cwiki.apache.org/confluence/display/OPENNLP/Index


Re: Design proposal - MXNet end to end models - Models with data transformations

2019-01-16 Thread Sheng Zha
Hi Sandeep,

Thanks for taking the initiative and sharing the proposal. It's great to
see the image operators being extended.

To summarize, the design for the first phase provides two alternatives:
  - D1: Use the existing approach of expressing data transformation
pipeline as hybrid block, and use operators for transformations to achieve
portability. Extend operators for performance and usability.
  - D2 (called alternative approach 1 in the proposal): Extend the model
export API to express the concept of auxiliary graphs in the same json
symbol file.

First, on D2: the proposed addition of an auxiliary graph seems neither
sufficient by itself nor necessary. This is because the additional field
relies on the operators and symbolic interface of mxnet. If one can use a
HybridBlock to express the data preprocessing logic, that HybridBlock can
already, without the new field, be easily exported and then imported as a
separate symbol from the model symbol, and used in other language bindings
for data preprocessing. On the other hand, if the logic cannot be expressed
as a HybridBlock, then you still wouldn't be able to put it in the auxiliary
graph field anyway.
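
To make the point above concrete, a minimal sketch (the block body, input
shape and file names are illustrative assumptions, not part of the proposal):

import mxnet as mx
from mxnet.gluon import nn

class Preprocess(nn.HybridBlock):
    def hybrid_forward(self, F, x):
        # any symbolic-compatible transformation works here
        return x / 255.0

pre = Preprocess()
pre.hybridize()
pre(mx.nd.ones((1, 3, 224, 224)))   # one forward pass to build the cached graph
pre.export('preprocess')            # writes preprocess-symbol.json (+ params)

# the exported json can then be loaded on its own, separately from the model
# symbol, from Python or any other language binding:
sym = mx.sym.load('preprocess-symbol.json')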

For D1, extending the image operators and rely on them for portability is
definitely the right direction and the shortest path. Since this approach
comes from GluonCV and is already available as part of the export helper
[1], there's nothing new to review on the approach.

On the specific PRs listed as part of D1:
- It is great to see that stu1130@ is implementing resize, center_crop, and
crop operators from scratch [2][3][4]. These features have long been
desired, kudos!
- It is very nice to see the GPU and batch support being added in to_tensor
and normalize operator PRs [5][6] that sandeep-krishnamurthy@ is working
on. Some minor issues:
  - These PRs seem to assume that these are not yet operators. But they
certainly are.
  - As a result of this assumption, they move the existing code to new
files. Generally we should minimize such no-op changes, as they cause trouble
in viewing the edit history, and make them only when absolutely necessary.
(and call for review to the community: if you love CV, help on these PRs is
much appreciated)

Finally, I have a suggestion on the review request. This proposal lists a
plan of four phases while only providing designs for the first one. In this
case, unless you have solutions to address them for the community to
review, the wishful future phases may be better suited for a separate
roadmap discussion. As I spent quite some time going through the proposal
but found little new approach to review, I'd suggest not mixing them in a
design proposal or review request next time.

Hope it helps.

-sz

[1]
https://github.com/dmlc/gluon-cv/blob/master/gluoncv/utils/export_helper.py
[2] https://github.com/apache/incubator-mxnet/pull/13611/files
[3] https://github.com/apache/incubator-mxnet/pull/13694/files
[4] https://github.com/apache/incubator-mxnet/pull/13679/files
[5] https://github.com/apache/incubator-mxnet/pull/13837/files
[6] https://github.com/apache/incubator-mxnet/pull/13802/files


On Wed, Jan 16, 2019 at 4:47 PM sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> Hello Community,
>
> Me along with fellow MXNet contributors (Jake  >,
> Karan ) are working on the following
> problem:
> 1. Some of the data transformations used in training is applicable during
> inference. Most commonly transformations on validation data is same as
> transformations required during inference.
> 2. MXNet models do not contain data transformations as part of the graph.
> Making it harder, time consuming and duplicated effort to re create data
> transformation during inference. This problem is more evident in cross
> language use cases. Training in Gluon (Python) and inference in Java/C++.
>
> After few initial discussions with some of MXNet contributors (Zhi
> , Naveen , Sina
> ), design proposal, development plan, tasks,
> milestones and more details are captured in this document.
> https://cwiki.apache.org/confluence/display/MXNET/MXNet+end+to+end+models
>
> Please do provide your feedback via comments in the document or on this
> e-mail. All contributions are welcome. I will be creating JIRA stories and
> issues for initial tasks identified.
>
> --
> Sandeep Krishnamurthy
>


Re: MXNET-1294: Priority-based parameter propagation for improved data parallel training throughput

2019-01-14 Thread Sheng Zha
Hi Anand,

Thanks for sharing the work and for offering to improve mxnet and ps-lite.

If you don't need to test the integration, you can
- fork dmlc/ps-lite
- make the changes
- send a pull request back to the repo, just as you would to mxnet

If you need to test the integration in mxnet first, you can
- fork both mxnet and ps-lite
- in your mxnet fork, switch the ps-lite to your own fork
- develop and test in your own forks
- once ready, send PR to ps-lite, followed by PR to mxnet.

Given that you intend to slice the gradients into chunks and change the
order in which they are applied, I'm interested to see how you intend to
abstract them. If you have specific technical questions on either mxnet or
ps-lite, feel free to ask on github issues if that's easier for you.
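
Purely to illustrate the kind of abstraction being asked about (and not the
actual design from your paper): a rough sketch that slices one gradient into
fixed-size chunks and pushes them to a kvstore with per-chunk priorities. The
chunk size, key scheme and priority ordering are all assumptions.

import mxnet as mx

kv = mx.kv.create('local')
grad = mx.nd.uniform(shape=(4, 2500))        # stand-in for one layer's gradient
flat = grad.reshape((-1,))
chunk = 4096
bounds = [(i, min(i + chunk, flat.size)) for i in range(0, flat.size, chunk)]

for idx, (lo, hi) in enumerate(bounds):
    kv.init('grad_chunk_%d' % idx, mx.nd.zeros((hi - lo,)))
for idx, (lo, hi) in enumerate(bounds):
    # priority is a scheduling hint to the engine; higher values tend to run first
    kv.push('grad_chunk_%d' % idx, flat[lo:hi], priority=-idx)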

-sz

On Mon, Jan 14, 2019 at 9:23 AM Anand J  wrote:

> Hi All,
>
> I'm planning to add some improvements to MXNet KVStore based on the
> ideas from my recent SysML'19 paper
> https://anandj.in/wp-content/uploads/sysml.pdf. I have created a JIRA
> ticket on this: https://issues.apache.org/jira/browse/MXNET-1294.
>
> The code changes I'm planning to do require changes in PS-Lite and
> KVStore. I'm new to MXNet developer community. It would be helpful If
> someone can give guidance on how should I be proceeding with this.
>
> Thanks,
> Anand
>


Re: [DISCUSS] Make MKLDNN as a default on Maven nightly build

2019-01-14 Thread Sheng Zha
+1 if the licensing aspect is ok. Since MKLDNN (open-source Apache 2.0
license) depends on MKLML (binary only), which carries its own license (see
below for the full text), we need to check if it's ok to include this
license in our binary distribution. Full text of the MKLML license:

Copyright (c) 2016-2018, Intel Corporation
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice,
  this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in the
  documentation and/or other materials provided with the distribution.
* Neither the name of Intel Corporation nor the names of its
contributors
  may be used to endorse or promote products derived from this software
  without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

On Mon, Jan 14, 2019 at 10:56 AM Qing Lan  wrote:

> Hi all,
>
> I would like to raise a discussion on whether to make MKLDNN as a default
> in nightly build (1.5.0-SNAPSHOT) for MXNet Scala/Java binding. Currently
> Scala build with MKLDNN is supported since
> https://github.com/apache/incubator-mxnet/pull/13819 with CI. I do see
> the performance increase when dealing with the inference and it is also
> necessary to get it in nightly for beta-testing in order to make it
> official in 1.5.0.
>
> Thanks,
> Qing
>


[Announcement] New Committer: Tao Lv

2018-11-26 Thread Sheng Zha
We are pleased to announce Tao Lv as a new committer of Apache
MXNet. Tao's sustained contributions to the project have greatly improved
the CPU performance of MXNet.

Please join me to welcome Tao to the team!

-sz


Re: MXNet - Gluon - Audio

2018-11-20 Thread Sheng Zha
Hi Gaurav,

The performance concerns are not just about librosa, but also about the way it is
integrated. librosa, as a Python library, requires holding the GIL when called,
which makes asynchronous data preprocessing during training hard. Also,
the API design hasn't been verified on the more full-fledged use cases that you
outlined. Given that, and the lack of audio-processing expertise available for
reviewing the design doc, my suggestion is to continue the work as a Gluon
example until other use cases are adopted, which is what you started in
https://github.com/apache/incubator-mxnet/pull/13325. Once you make more
progress and become more familiar with the Gluon design, please report back to
this thread and I'd be happy to help more on the review.
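
As a concrete illustration of the direction suggested in the quoted thread
(transforms expressed as hybrid blocks over MXNet operators instead of librosa
calls, so the computation runs in the MXNet engine rather than under the
Python GIL), here is a toy sketch; the frame-energy feature is only a
placeholder assumption, not a stand-in for MFCC/mel features:

import mxnet as mx
from mxnet.gluon import nn

class FrameEnergy(nn.HybridBlock):
    # toy waveform -> per-frame energy transform built only from MXNet ops
    def __init__(self, frame_size=256, **kwargs):
        super(FrameEnergy, self).__init__(**kwargs)
        self._frame_size = frame_size

    def hybrid_forward(self, F, x):
        # x: (batch, num_frames * frame_size)
        frames = F.reshape(x, shape=(0, -1, self._frame_size))
        return F.mean(F.square(frames), axis=2)

transform = FrameEnergy()
transform.hybridize()
print(transform(mx.nd.uniform(shape=(4, 4096))).shape)   # -> (4, 16)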

-sz

On 2018/11/20 19:20:18, Gaurav Gireesh  wrote: 
> Hi All!
> Following up on this PR:
> https://github.com/apache/incubator-mxnet/pull/13241
> I would need some comments or feedback regarding the API design :
> https://cwiki.apache.org/confluence/display/MXNET/Gluon+-+Audio
> 
> The comments on the PR were mostly around *librosa *and its performance
> being a blocker if and when the designed API can be tested with bigger ASR
> models DeepSpeech 2, DeepSpeech 3.
> I would appreciate if the community provides their expertise/knowledge on
> loading audio data and feature extraction used currently with bigger ARS
> models.
> If there is anything in design which may be changed/improved that will
> improve the performance, I ll be happy to look into this.
> 
> Thanks and regards,
> Gaurav Gireesh
> 
> On Thu, Nov 15, 2018 at 10:47 AM Gaurav Gireesh 
> wrote:
> 
> > Hi Lai!
> > Thank you for your comments!
> > Below are the answers to your comments/queries:
> > 1) That's a good suggestion. However, I have added an example in the Pull
> > request related to this:
> > https://github.com/apache/incubator-mxnet/pull/13241/commits/eabb68256d8fd603a0075eafcd8947d92e7df27f
> > .
> > I would be happy to include a dataset similar to MNIST to support that. I
> > have come across an example dataset used in tensor flow speech
> > related example here
> > . This
> > could be included.
> >
> > 2) Thank you for the suggestion, I shall look into the FFT operator that
> > you have pointed out. However, there are other kind of features like, mfcc,
> > mels and so on which are popular in audio data feature extraction, which
> > will find utility if implemented. I am not sure if we have operators for
> > this.
> >
> > 3) The references look good too. I shall look into them. Thank you for
> > bringing them into my notice.
> >
> > Regards,
> > Gaurav
> >
> > On Tue, Nov 13, 2018 at 11:22 AM Lai Wei  wrote:
> >
> >> Hi Gaurav,
> >>
> >> Thanks for starting this. I see the PR is out
> >> , left some initial
> >> reviews, good work!
> >>
> >> In addition to Sandeep's queries, I have the following:
> >> 1. Can we include some simple classic audio dataset for users to directly
> >> import and try out? like MNIST in vision. (e.g.:
> >> http://pytorch.org/audio/datasets.html#yesno)
> >> 2. Librosa provides some good audio feature extractions, we can use it for
> >> now. But it's slow as you have to do conversions between ndarray and
> >> numpy.
> >> In the long term, can we make transforms to use mxnet operators and change
> >> your transforms to hybrid blocks? For example, mxnet FFT
> >> <
> >> https://mxnet.apache.org/api/python/ndarray/contrib.html?highlight=fft#mxnet.ndarray.contrib.fft
> >> >
> >> operator
> >> can be used in a hybrid block transformer, which will be a lot faster.
> >>
> >> Some additional references on users already using mxnet on audio, we
> >> should
> >> aim to make it easier and automate the file load/preprocess/transform
> >> process.
> >> 1. https://github.com/chen0040/mxnet-audio
> >> 2. https://github.com/shuokay/mxnet-wavenet
> >>
> >> Looking forward to seeing this feature out.
> >> Thanks!
> >>
> >> Best Regards
> >>
> >> Lai
> >>
> >>
> >> On Tue, Nov 13, 2018 at 9:09 AM sandeep krishnamurthy <
> >> sandeep.krishn...@gmail.com> wrote:
> >>
> >> > Thanks, Gaurav for starting this initiative. The design document is
> >> > detailed and gives all the information.
> >> > Starting to add this in "Contrib" is a good idea while we expect a few
> >> > rough edges and cleanups to follow.
> >> >
> >> > I had the following queries:
> >> > 1. Is there any analysis comparing LibROSA with other libraries? w.r.t
> >> > features, performance, community usage in audio data domain.
> >> > 2. What is the recommendation of LibROSA dependency? Part of MXNet PyPi
> >> or
> >> > ask the user to install if required? I prefer the latter, similar to
> >> > protobuf in ONNX-MXNet.
> >> > 3. I see LibROSA is a fully Python-based library. Are we getting
> >> blocked on
> >> > the dependency for future use cases when we want to make
> >> transformations as
> >> > 

Re: LabelBot New Design in Production

2018-11-16 Thread Sheng Zha
Thanks, Harsh. I saw that this was created and used on several issues and I
removed it for now because:

- the issues that it is used on don't seem to be resolved.

- it gives requesters the impression that people think their issues are not
worth this community's attention, which seems unwelcoming.

- it seems equivalent to "some existing label" + "open for a long time",
which means it doesn't add value in classifying the issues.


If the goal is to identify stale issues, how about creating an issue or a
wiki page, and having a script update the stale issue list periodically?
This way, committers can always go visit that issue/wikipage and help with
the stale issues. It also forms the basis for a public dashboard for other
aspects of the project, which is likely worthwhile.
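
A rough sketch of what that script could look like, using GitHub's search API
(the repository name, the 180-day cutoff and the absence of label filters are
assumptions; unauthenticated requests are rate-limited):

import datetime
import requests

def stale_issues(repo='apache/incubator-mxnet', days=180):
    cutoff = (datetime.date.today() - datetime.timedelta(days=days)).isoformat()
    query = 'repo:{} is:issue is:open updated:<{}'.format(repo, cutoff)
    resp = requests.get('https://api.github.com/search/issues',
                        params={'q': query, 'sort': 'updated', 'order': 'asc'})
    resp.raise_for_status()
    return [(item['number'], item['title']) for item in resp.json()['items']]

for number, title in stale_issues()[:20]:
    print('#{}: {}'.format(number, title))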


What do you think?



Best regards,

-sz

On Fri, Nov 16, 2018 at 11:41 AM Harsh Patel 
wrote:

> Hey all,
> To help with how we handle issues for MXNet, I am proposing a new label be
> created called: [suggest-closed]. I, alongside many others, observe many
> stale issues which can be candidates for closure and searching for these of
> the 800+ issues we have is a daunting task. This label is meant to help tag
> issues which the community believes should be closed. To clarify, this is
> not meant to actually close issues, it is simply a suggestion which
> contributors can feel free to label. If I am able to get a committer to
> help create this that would be great!
>
> Best,
> -Harsh
>
> On Thu, Nov 8, 2018 at 11:28 PM Hagay Lupesko  wrote:
>
> >
> > > improve over time (think about it recommending you to check out the
> > discuss
> > > forum when you ask a question, asking you to provide a minimum
> > reproducible
> > > example if you report a bug, etc). That way, we would reduce the amount
> > > boilerplate in the issue template and at the same time provide the user
> > > with custom tailored assistance.
> > >
> > > Best regards,
> > > Marco
> > >
> > > On Fri, Nov 9, 2018 at 1:00 AM Naveen Swamy 
> wrote:
> > >
> > > > Great job!, this is very helpful to triage issues!, users when
> > creating a
> > > > new Issue could themselves tag the issues. May be we should add that
> to
> > > the
> > > > issue template?
> > > >
> > > > On Thu, Nov 8, 2018 at 3:54 PM Harsh Patel <
> harshpatel081...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Hey all,
> > > > > The upgraded label bot has been pushed into production. Current
> > > > > functionality includes
> > > > > add, delete, and update.
> > > > > (i.e. @mxnet-label-bot add ['label']
> > > > > @mxnet-label-bot remove ['label']
> > > > > @mxnet-label-bot update ['label'])
> > > > >
> > > > > Users should feel free to leave suggestions and any potential
> issues.
> > > The
> > > > > forum to this best would be here:
> > > > > https://github.com/apache/incubator-mxnet/issues/13163
> > > > >
> > > > > Best,
> > > > > -Harsh Patel
> > > > >
> > > >
> > >
> >
>


Re: [Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Sheng Zha
I was in the middle of transferring all items labeled with "Feature" to the
"Feature request" label when the "Feature" label was deleted. I'm not sure who
deleted the "Feature" label, but it's gone now.

-sz

On Tue, Nov 13, 2018 at 5:05 PM Anirudh Acharya 
wrote:

> This issue was raised before here -
>
> https://lists.apache.org/thread.html/3e988e6bd82cb2d69ba20c21bf763952ed22a5732e61f6fba1f89ac8@%3Cdev.mxnet.apache.org%3E
>
> We need someone with committer privileges to fix it.
>
>
> Thanks
> Anirudh
>
>
>
> On Tue, Nov 13, 2018 at 4:36 PM Lin Yuan  wrote:
>
> > Dear Community,
> >
> > I often see there are "Feature" and "Feature request" labels in Github
> > issues. May I know the difference? If they are meant to be the same
> thing,
> > can we only keep one of them?
> >
> > Thanks,
> >
> > Lin
> >
>


Re: [Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Sheng Zha
Oh, I see. I was moving the other 80 or so, so it was probably a
race-condition.
Anyway, thanks for being eager to help.

-sz

On Tue, Nov 13, 2018 at 5:24 PM Naveen Swamy  wrote:

> done now, removed the feature label, there were 4 issues with that label
> but also had Feature Request.
>
> On Tue, Nov 13, 2018 at 5:05 PM Anirudh Acharya 
> wrote:
>
> > This issue was raised before here -
> >
> >
> https://lists.apache.org/thread.html/3e988e6bd82cb2d69ba20c21bf763952ed22a5732e61f6fba1f89ac8@%3Cdev.mxnet.apache.org%3E
> >
> > We need someone with committer privileges to fix it.
> >
> >
> > Thanks
> > Anirudh
> >
> >
> >
> > On Tue, Nov 13, 2018 at 4:36 PM Lin Yuan  wrote:
> >
> > > Dear Community,
> > >
> > > I often see there are "Feature" and "Feature request" labels in Github
> > > issues. May I know the difference? If they are meant to be the same
> > thing,
> > > can we only keep one of them?
> > >
> > > Thanks,
> > >
> > > Lin
> > >
> >
>


Re: MKLDNN dynamically linked

2018-11-08 Thread Sheng Zha
+1. Ideally, MKLDNN would be statically linked. mxnet-mkl relies on Make for
building, so help is wanted on the mxnet side.
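
For anyone who wants to check what their install currently does, a quick
sketch (assumes a Linux pip install where libmxnet.so sits inside the mxnet
package directory):

import os
import subprocess
import mxnet

libmxnet = os.path.join(os.path.dirname(mxnet.__file__), 'libmxnet.so')
deps = subprocess.check_output(['ldd', libmxnet]).decode()
# prints the mkldnn line(s) if libmxnet.so is still dynamically linked to it
print('\n'.join(l.strip() for l in deps.splitlines() if 'mkldnn' in l))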

-sz

On 2018/11/08 21:28:50, Alex Zai  wrote: 
> Currently in mxnet-mkl the libmxnet.so is dynamically linked to
> libmkldnn.so.0. This is known to cause some issues if the wrong version of
> mkldnn is linked. Can we static link this file instead?
> 
> Alex
> 


Re: [RESULT][LAZY VOTE] Next MXNet release

2018-11-07 Thread Sheng Zha
Reporting back on this thread, I received feedback with some valid
concerns:
- There are different parties that are already working toward the
previously communicated timeline.
- It creates a hassle for people.

Although I'd love to see the release happening soon, let's keep the
existing timeline. Next time a release happens, we can consider
releasing multiple versions at the same time with more thought and
coordination.

-sz

On Wed, Nov 7, 2018 at 8:50 AM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> +1 to trying to get a 1.4.0 Nov release.  I think the MKLDNN work alone is
> a headline feature that users would love to get their hands on.
>
> On Tue, Nov 6, 2018 at 11:32 PM Sheng Zha  wrote:
>
> > I'd like to propose that we expedite the 1.4.0 release slightly as there
> > doesn't seem to be a rule that prevents a minor release from happening at
> > the same time of a patch release. This would shorten the time it takes
> for
> > new features to reach users. Proposed revision to the timeline:
> > - Code freeze: 11/9
> > - Release published: 11/22
> >
> > If there's no issue about both the proposal and new timeline, I'd be
> happy
> > to manage 1.4.0 release as release manager.
> >
> > -sz
> >
> > On Thu, Nov 1, 2018 at 7:56 AM Steffen Rochel 
> > wrote:
> >
> > > There have been no objections, so lazy vote passed.
> > > Anton volunteered to manage the 1.3.1 release and Naveen will support
> him
> > > as co-manager to handle the release tasks requiring committer powers.
> > > Please support Anton for a smooth 1.3.1 release process.
> > >
> > > I'm still looking for volunteers to manage / co-manage the 1.4.0
> release.
> > >
> > > Regards,
> > > Steffen
> > >
> > > On Sun, Oct 28, 2018 at 7:33 PM Steffen Rochel <
> steffenroc...@gmail.com>
> > > wrote:
> > >
> > > > I calling a lazy vote to release MXNet
> > > > 1.3.1 (patch release) and 1.4.0 (minor relase).
> > > >
> > > > Release content: release proposal page
> > > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release
> > > >
> > > >
> > > > Target milestones:
> > > > *1.3.1*
> > > >
> > > >- Code Freeze: 10/31
> > > >- Release published: 11/13
> > > >
> > > > *1.4.0:*
> > > >
> > > >- Code Freeze: 11/13
> > > >- Release published: 12/13 (if possible announce during NIPS)
> > > >
> > > >
> > > > The vote will be open until Wednesday October 31, 2018 8.00pm PDT.
> > > >
> > > > Regards,
> > > > Steffen
> > > >
> > > > On Fri, Oct 26, 2018 at 7:56 AM Steffen Rochel <
> > steffenroc...@gmail.com>
> > > > wrote:
> > > >
> > > >> During the Hangout on Wednesday multiple release proposals have been
> > > >> discussed. I summarized discussion here
> > > >> <
> > >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Hangout+October+24th+2018+8am+and+5pm+PDT
> > >
> > > and
> > > >> updated the release proposal page
> > > >> <
> > >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release
> > > >
> > > >> .
> > > >> Please review, provide feedback and propose changes.
> > > >> I plan to start a lazy vote on Sunday regarding the release
> proposal.
> > > >>
> > > >> Calling for volunteers to manage the 1.3.1 and 1.4.0 release.
> > > >>
> > > >> Regards,
> > > >> Steffen
> > > >>
> > > >> On Tue, Oct 9, 2018 at 7:20 AM kellen sunderland <
> > > >> kellen.sunderl...@gmail.com> wrote:
> > > >>
> > > >>> Hey Steffen,
> > > >>>
> > > >>> Recommend these be merged into patch release:
> > > >>>
> > > >>> https://github.com/apache/incubator-mxnet/pull/12631
> > > >>> https://github.com/apache/incubator-mxnet/pull/12603
> > > >>> https://github.com/apache/incubator-mxnet/pull/12499
> > > >>>
> > > >>> -Kellen
> > > >>>
> > > >>> On Tue, Oct 2, 2018 at 7:17 AM Zhao, Patric  >
> > > >>> wrote:
> >

Re: [Announce] Upcoming Apache MXNet (incubating) 1.3.1 patch release

2018-11-07 Thread Sheng Zha
Hi Anton,

I hear your concern about a simultaneous 1.4.0 release and it certainly is a 
valid one.

Regarding the release, let’s agree on the language first. According to
semver.org, a 1.3.1 release is considered a patch release, which is for backward
compatible bug fixes, while 1.4.0 is considered a minor release, which is for
backward compatible new features. A major release would mean 2.0.

The three PRs suggested by Haibin and Lin all introduce new features. If they
go into a patch release, that would require an exception accepted by the
community. Also, if other violations happen, they could be grounds for
declining a release during the vote.

-sz

> On Nov 7, 2018, at 2:25 AM, Anton Chernov  wrote:
> 
> [MXNET-1179] Enforce deterministic algorithms in convolution layers


Re: [RESULT][LAZY VOTE] Next MXNet release

2018-11-06 Thread Sheng Zha
I'd like to propose that we expedite the 1.4.0 release slightly as there
doesn't seem to be a rule that prevents a minor release from happening at
the same time as a patch release. This would shorten the time it takes for
new features to reach users. Proposed revision to the timeline:
- Code freeze: 11/9
- Release published: 11/22

If there's no issue about both the proposal and new timeline, I'd be happy
to manage 1.4.0 release as release manager.

-sz

On Thu, Nov 1, 2018 at 7:56 AM Steffen Rochel 
wrote:

> There have been no objections, so lazy vote passed.
> Anton volunteered to manage the 1.3.1 release and Naveen will support him
> as co-manager to handle the release tasks requiring committer powers.
> Please support Anton for a smooth 1.3.1 release process.
>
> I'm still looking for volunteers to manage / co-manage the 1.4.0 release.
>
> Regards,
> Steffen
>
> On Sun, Oct 28, 2018 at 7:33 PM Steffen Rochel 
> wrote:
>
> > I calling a lazy vote to release MXNet
> > 1.3.1 (patch release) and 1.4.0 (minor relase).
> >
> > Release content: release proposal page
> > <
> https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release
> >
> >
> > Target milestones:
> > *1.3.1*
> >
> >- Code Freeze: 10/31
> >- Release published: 11/13
> >
> > *1.4.0:*
> >
> >- Code Freeze: 11/13
> >- Release published: 12/13 (if possible announce during NIPS)
> >
> >
> > The vote will be open until Wednesday October 31, 2018 8.00pm PDT.
> >
> > Regards,
> > Steffen
> >
> > On Fri, Oct 26, 2018 at 7:56 AM Steffen Rochel 
> > wrote:
> >
> >> During the Hangout on Wednesday multiple release proposals have been
> >> discussed. I summarized discussion here
> >> <
> https://cwiki.apache.org/confluence/display/MXNET/Hangout+October+24th+2018+8am+and+5pm+PDT>
> and
> >> updated the release proposal page
> >> <
> https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release
> >
> >> .
> >> Please review, provide feedback and propose changes.
> >> I plan to start a lazy vote on Sunday regarding the release proposal.
> >>
> >> Calling for volunteers to manage the 1.3.1 and 1.4.0 release.
> >>
> >> Regards,
> >> Steffen
> >>
> >> On Tue, Oct 9, 2018 at 7:20 AM kellen sunderland <
> >> kellen.sunderl...@gmail.com> wrote:
> >>
> >>> Hey Steffen,
> >>>
> >>> Recommend these be merged into patch release:
> >>>
> >>> https://github.com/apache/incubator-mxnet/pull/12631
> >>> https://github.com/apache/incubator-mxnet/pull/12603
> >>> https://github.com/apache/incubator-mxnet/pull/12499
> >>>
> >>> -Kellen
> >>>
> >>> On Tue, Oct 2, 2018 at 7:17 AM Zhao, Patric 
> >>> wrote:
> >>>
> >>> > Thanks to let us know this discussion.
> >>> > Because we don't have enough bandwidth to track the different
> sources,
> >>> > like discussion forum.
> >>> >
> >>> > I think the best way is to open issue in the github so that we can
> >>> > answer/solve the issue in time :)
> >>> >
> >>> > Thanks,
> >>> >
> >>> > --Patric
> >>> >
> >>> > > -Original Message-
> >>> > > From: Afrooze, Sina [mailto:sina@gmail.com]
> >>> > > Sent: Tuesday, October 2, 2018 1:14 AM
> >>> > > To: dev@mxnet.incubator.apache.org
> >>> > > Cc: Ye, Jason Y ; Zai, Alexander
> >>> > > ; Zheng, Da 
> >>> > > Subject: Re: [Discuss] Next MXNet release
> >>> > >
> >>> > > This post suggests there is a regression from 1.1.0 to 1.2.1
> related
> >>> to
> >>> > > MKLDNN integration:
> >>> https://discuss.mxnet.io/t/mxnet-1-2-1-module-get-
> >>> > > outputs/1882
> >>> > >
> >>> > > The error is related to MKLDNN layout not being converted back to
> >>> MXNet
> >>> > > layout in some operator: " !IsMKLDNNData() We can’t generate TBlob
> >>> for
> >>> > > MKLDNN data. Please use Reorder2Default() to generate a new NDArray
> >>> > > first"
> >>> > >
> >>> > > Sina
> >>> > >
> >>> > >
> >>> > >
> >>> > >
> >>> > > On 9/30/18, 6:55 PM, "Steffen Rochel" 
> >>> wrote:
> >>> > >
> >>> > > Thanks Patrick.
> >>> > > Updated roadmap and next release content.
> >>> > >
> >>> > > Patrick - suggest to send a reminder to review the design doc
> and
> >>> > collect
> >>> > > feedback.
> >>> > > Are there still known issues or gaps before we declare MKL-DNN
> >>> > > integration
> >>> > > as GA?
> >>> > >
> >>> > > Regards,
> >>> > > Steffen
> >>> > >
> >>> > > On Sat, Sep 29, 2018 at 1:31 AM Zhao, Patric <
> >>> patric.z...@intel.com>
> >>> > > wrote:
> >>> > >
> >>> > > > Thanks, Steffen.
> >>> > > >
> >>> > > > Regarding the next release note, two items from our side:
> >>> > > >
> >>> > > > 1. (-remove) MKL-DNN integration is done. I think we can
> remove
> >>> > this
> >>> > > item.
> >>> > > > 2. (+add) MKL-DNN based graph optimization and quantization
> by
> >>> > > subgraph
> >>> > > > Design doc:
> >>> > > >
> >>> > >
> >>> https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimiz
> >>> > > 

Re: [Announce] Upcoming Apache MXNet (incubating) 1.3.1 patch release

2018-11-06 Thread Sheng Zha
Similar to the two PRs that Haibin suggested, 12992 introduces a new interface
for controlling determinism, which is better suited for a minor release.

I think that, other than the lack of a release manager to drive the 1.4.0 release,
there’s no reason we cannot do two releases (1.4.0 & 1.3.1) at the same time. I’m
willing to help with the 1.4.0 release to make these new features available one
month sooner, if there’s no other concern.

-sz

> On Nov 6, 2018, at 3:30 PM, Lin Yuan  wrote:
> 
> Hi Anton,
> 
> Thanks for helping the release.
> The following PRs are needed by customers who want to use deterministic
> CUDNN convolution algorithms:
> 
> https://github.com/apache/incubator-mxnet/pull/12992
> https://github.com/apache/incubator-mxnet/pull/13049
> 
> Thanks!
> 
> Lin
> 
> 
> On Tue, Nov 6, 2018 at 1:51 PM Aaron Markham 
> wrote:
> 
>> Hi Anton,
>> I have the following suggestions for fixes to include in 1.3.1. These each
>> have updates to files that will impact docs generation for the 1.3.x
>> version of the website's Python API docs:
>> 
>> https://github.com/apache/incubator-mxnet/pull/12879
>> https://github.com/apache/incubator-mxnet/pull/12871
>> https://github.com/apache/incubator-mxnet/pull/12856
>> 
>> Thanks,
>> Aaron
>> 
>>> On Tue, Nov 6, 2018 at 1:29 PM Lai Wei  wrote:
>>> 
>>> Hi Anton,
>>> 
>>> Thanks for driving this, I would like to include the following fix in
>>> 1.3.1:
>>> Allow infer shape partial on foreach operator:
>>> https://github.com/apache/incubator-mxnet/pull/12471
>>> 
>>> Keras-MXNet needs this functionality to infer shape partially
>>> on foreach operator. (Used in RNN operators)
>>> 
>>> Thanks a lot!
>>> 
>>> 
>>> Best Regards
>>> Lai Wei
>>> 
>>> 
>>> 
>>> On Tue, Nov 6, 2018 at 10:44 AM Haibin Lin 
>>> wrote:
>>> 
 Hi Naveen and Anton,
 
 Thanks for pointing that out. You are right that these are not critical
 fixes. Putting them in 1.4.0 is more appropriate. PRs are closed.
 
 Best,
 Haibin
 
 On Tue, Nov 6, 2018 at 7:35 AM Naveen Swamy 
>> wrote:
 
> Please note that this is a patch release(1.3.1) to address critical
 bugs!,
> For everything else please wait for 1.4.0 which is planned very
>> shortly
> after 1.3.1
> 
>> On Nov 6, 2018, at 7:17 AM, Anton Chernov 
>>> wrote:
>> 
>> The following PR's have been created so far:
>> 
>> Infer dtype in SymbolBlock import from input symbol (v1.3.x)
>> https://github.com/apache/incubator-mxnet/pull/13117
>> 
>> [MXNET-953] Fix oob memory read (v1.3.x)
>> https://github.com/apache/incubator-mxnet/pull/13118
>> 
>> [MXNET-969] Fix buffer overflow in RNNOp (v1.3.x)
>> https://github.com/apache/incubator-mxnet/pull/13119
>> 
>> [MXNET-922] Fix memleak in profiler (v1.3.x)
>> https://github.com/apache/incubator-mxnet/pull/13120
>> 
>> Set correct update on kvstore flag in dist_device_sync mode
>> (v1.3.x)
>> https://github.com/apache/incubator-mxnet/pull/13121
>> 
>> update mshadow (v1.3.x)
>> https://github.com/apache/incubator-mxnet/pull/13122
>> 
>> CudnnFind() usage improvements (v1.3.x)
>> https://github.com/apache/incubator-mxnet/pull/13123
>> 
>> Fix lazy record io when used with dataloader and multi_worker > 0
> (v1.3.x)
>> https://github.com/apache/incubator-mxnet/pull/13124
>> 
>> 
>> As stated previously I would be rather opposed to have following
>> PR's
 it
> in
>> the patch release:
>> 
>> Gluon LSTM Projection and Clipping Support (#13055) v1.3.x
>> https://github.com/apache/incubator-mxnet/pull/13129
>> 
>> sample_like operators (#13034) v1.3.x
>> https://github.com/apache/incubator-mxnet/pull/13130
>> 
>> 
>> Best
>> Anton
>> 
>> вт, 6 нояб. 2018 г. в 16:06, Anton Chernov :
>> 
>>> Hi Haibin,
>>> 
>>> I have a few comments regarding the proposed performance
>> improvement
>>> changes.
>>> 
>>> CUDNN support for LSTM with projection & clipping
>>> https://github.com/apache/incubator-mxnet/pull/13056
>>> 
>>> There is no doubt that this change brings value, but I don't see
>> it
 as a
>>> critical bug fix. I would rather leave it for the next major
>>> release.
>>> 
>>> sample_like operators
>>> https://github.com/apache/incubator-mxnet/pull/13034
>>> 
>>> Even if it's related to performance, this is an addition of
> functionality
>>> and I would also push this to be in the next major release only.
>>> 
>>> 
>>> Best
>>> Anton
>>> 
>>> 
>>> вт, 6 нояб. 2018 г. в 15:55, Anton Chernov :
>>> 
 Hi Patric,
 
 This change was listed in the 'PR candidates suggested for
> consideration
 for v1.3.1 patch release' section [1].
 
 You are right, I also think that this is not a critical hotfix
>>> change
 that should be included 

Security Risk in requests<2.20 (CVE-2018-18074)

2018-10-29 Thread Sheng Zha
See https://github.com/apache/incubator-mxnet/issues/13032
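
A quick way to check whether a local environment is affected (assumes a
pip-managed install of requests):

import pkg_resources

installed = pkg_resources.get_distribution('requests').version
if pkg_resources.parse_version(installed) < pkg_resources.parse_version('2.20.0'):
    print('requests %s is affected by CVE-2018-18074; upgrade to >=2.20.0' % installed)
else:
    print('requests %s is not affected' % installed)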


Re: reject

2018-09-22 Thread Sheng Zha
Hi Craig,

Thank you for catching that. I believe the download page is fixed now. Let
us know if you see any more problems. If not, I can resend the announcement
if needed.

-sz

On Wed, Sep 19, 2018 at 8:02 PM Private LIst Moderation <
mod-priv...@gsuite.cloud.apache.org> wrote:

> I'm afraid the downloads page still does not meet requirements.
>
> 1. The artifact must link to a mirror site, e.g. the dyn/closer page (not
> to github)
>
> 2. The checksum and signature must link directly to the apache.org/dist
> site (not to github or a mirror)
>
> Please update the downloads page and let us know when you've done so.
>
> Regards,
>
> Craig
>
> > Begin forwarded message:
> >
> > From: announce-reject-1537392657.65969.akeecpnfdegkalmpn...@apache.org
> > Subject: MODERATE for annou...@apache.org
> > Date: September 19, 2018 at 2:30:57 PM PDT
> > To: Recipient list not shown: ;
> > Cc: announce-allow-tc.1537392657.afleppkblokjjcklocac-zhasheng=
> apache@apache.org
> > Reply-To:
> announce-accept-1537392657.65969.akeecpnfdegkalmpn...@apache.org
> >
> >
> > To approve:
> >   announce-accept-1537392657.65969.akeecpnfdegkalmpn...@apache.org
> > To reject:
> >   announce-reject-1537392657.65969.akeecpnfdegkalmpn...@apache.org
> > To give a reason to reject:
> > %%% Start comment
> > %%% End comment
> >
> >
> > From: Sheng Zha 
> > Subject: [ANNOUNCE] Apache MXNet (incubating) 1.3.0 Release
> > Date: September 19, 2018 at 2:31:18 PM PDT
> > To: annou...@apache.org
> >
> >
> > Hello all,
> >
> > The Apache MXNet (incubating) Community announces the availability of
> > Apache MXNet (incubating) 1.3.0!
> >
> > Release blog post:
> >
> https://blogs.apache.org/mxnet/entry/announcing-apache-mxnet-incubating-1
> >  <
> https://blogs.apache.org/mxnet/entry/announcing-apache-mxnet-incubating-1>
> >
> https://medium.com/apache-mxnet/announcing-apache-mxnet-1-3-0-484ea78c22ad
> <
> https://medium.com/apache-mxnet/announcing-apache-mxnet-1-3-0-484ea78c22ad
> >
> >
> > Apache MXNet (incubating) is a deep learning framework designed for
> > both efficiency and flexibility. It allows you to mix symbolic and
> > imperative programming to maximize efficiency and productivity.
> >
> > This release improves usability, performance, and interoperability.
> >
> > A full list of the changes in this release can be found in the release
> > notes:
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Notes
> <
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Notes
> >
> >
> > A Link to the Download is here:
> > http://mxnet.incubator.apache.org/install/download.html <
> http://mxnet.incubator.apache.org/install/download.html>
> >
> > If you prefer to build from source and experiment with various
> > compile-time configuration options, use this link to get the
> > instructions:
> > http://mxnet.incubator.apache.org/install/index.html <
> http://mxnet.incubator.apache.org/install/index.html>
> >
> > Or You can download and play with MXNet easily using one of the options
> > below:
> >1. The Pip packages can be found here:
> https://pypi.python.org/pypi/mxnet <https://pypi.python.org/pypi/mxnet>
> >2. The Docker Images can be found here:
> > https://hub.docker.com/r/mxnet/python/ <
> https://hub.docker.com/r/mxnet/python/>
> >
> > Links in Maven to the published Scala packages:
> >
> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/
> <
> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/
> >
> > https://repository.apache.org/#nexus-search;quick~org.apache.mxnet <
> https://repository.apache.org/#nexus-search;quick~org.apache.mxnet>
> >
> > and to the experimental Clojure packages:
> >
> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/contrib/clojure/
> <
> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/contrib/clojure/
> >
> >
> > The release tag used for the 1.3.0 release is:
> > https://github.com/apache/incubator-mxnet/tree/1.3.0 <
> https://github.com/apache/incubator-mxnet/tree/1.3.0>
> >
> > Some more MXNet Resources:
> >1. Issues: https://github.com/apache/incubator-mxnet/issues <
> https://github.com/apache/incubator-mxnet/issues>
> >2. Wiki: https://cwiki.apache.org/confluence/displ

[ANNOUNCE] Apache MXNet (incubating) 1.3.0 Release

2018-09-19 Thread Sheng Zha
Hello all,

The Apache MXNet (incubating) Community announces the availability of
Apache MXNet (incubating) 1.3.0!

Release blog post:
https://blogs.apache.org/mxnet/entry/announcing-apache-mxnet-incubating-1
https://medium.com/apache-mxnet/announcing-apache-mxnet-1-3-0-484ea78c22ad

Apache MXNet (incubating) is a deep learning framework designed for
both efficiency and flexibility. It allows you to mix symbolic and
imperative programming to maximize efficiency and productivity.

This release improves usability, performance, and interoperability.

A full list of the changes in this release can be found in the release
notes:
https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Notes

A Link to the Download is here:
http://mxnet.incubator.apache.org/install/download.html

If you prefer to build from source and experiment with various
compile-time configuration options, use this link to get the
instructions:
http://mxnet.incubator.apache.org/install/index.html

Or you can download and play with MXNet easily using one of the options
below:
   1. The Pip packages can be found here: https://pypi.python.org/pypi/mxnet
   2. The Docker Images can be found here:
https://hub.docker.com/r/mxnet/python/
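
For example, after installing the pip package (the exact
"pip install mxnet==1.3.0" command is an assumption based on the PyPI link
above), a quick sanity check:

import mxnet as mx

print(mx.__version__)                      # expect '1.3.0'
print((mx.nd.ones((2, 3)) * 2).asnumpy())  # simple NDArray op on CPU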

Links in Maven to the published Scala packages:
https://repository.apache.org/content/repositories/releases/org/apache/mxnet/
https://repository.apache.org/#nexus-search;quick~org.apache.mxnet

and to the experimental Clojure packages:
https://repository.apache.org/content/repositories/releases/org/apache/mxnet/contrib/clojure/

The release tag used for the 1.3.0 release is:
https://github.com/apache/incubator-mxnet/tree/1.3.0

Some more MXNet Resources:
   1. Issues: https://github.com/apache/incubator-mxnet/issues
   2. Wiki: https://cwiki.apache.org/confluence/display/MXNET


If you want to learn more about MXNet visit
http://mxnet.incubator.apache.org/

Finally, you are welcome to join and also invite your friends to the
dynamic and growing MXNet community by subscribing to
dev@mxnet.incubator.apache.org


Acknowledgments:
We would like to thank everyone who contributed to the 1.3.0 release:

Aaron Markham, Abhinav Sharma, access2rohit, Alex Li, Alexander Alexandrov,
Alexander Zai, Amol Lele, Andrew Ayres, Anirudh Acharya, Anirudh
Subramanian, Ankit Khedia, Anton Chernov, aplikaplik, Arunkumar V Ramanan,
Asmus Hetzel, Aston Zhang, bl0, Ben Kamphaus, brli, Burin Choomnuan,
Burness Duan, Caenorst, Cliff Woolley, Carin Meier, cclauss, Carl Tsai,
Chance Bair, chinakook, Chudong Tian, ciyong, ctcyang, Da Zheng, Dang Trung
Kien, Deokjae Lee, Dick Carter, Didier A., Eric Junyuan Xie, Faldict, Felix
Hieber, Francisco Facioni, Frank Liu, Gnanesh, Hagay Lupesko, Haibin Lin,
Hang Zhang, Hao Jin, Hao Li, Haozhi Qi, hasanmua, Hu Shiwen, Huilin Qu,
Indhu Bharathi, Istvan Fehervari, JackieWu, Jake Lee, James MacGlashan,
jeremiedb, Jerry Zhang, Jian Guo, Jin Huang, jimdunn, Jingbei Li, Jun Wu,
Kalyanee Chendke, Kellen Sunderland, Kovas Boguta, kpmurali, Kurman
Karabukaev, Lai Wei, Leonard Lausen, luobao-intel, Junru Shao, Lianmin
Zheng, Lin Yuan, lufenamazon, Marco de Abreu, Marek Kolodziej, Manu Seth,
Matthew Brookhart, Milan Desai, Mingkun Huang, miteshyh, Mu Li, Nan Zhu,
Naveen Swamy, Nehal J Wani, PatricZhao, Paul Stadig, Pedro Larroy,
perdasilva, Philip Hyunsu Cho, Pishen Tsai, Piyush Ghai, Pracheer Gupta,
Przemyslaw Tredak, Qiang Kou, Qing Lan, qiuhan, Rahul Huilgol, Rakesh
Vasudevan, Ray Zhang, Robert Stone, Roshani Nagmote, Sam Skalicky, Sandeep
Krishnamurthy, Sebastian Bodenstein, Sergey Kolychev, Sergey Sokolov, Sheng
Zha, Shen Zhu, Sheng-Ying, Shuai Zheng, slitsey, Simon, Sina Afrooze, Soji
Adeshina, solin319, Soonhwan-Kwon, starimpact, Steffen Rochel, Taliesin
Beynon, Tao Lv, Thom Lane, Thomas Delteil, Tianqi Chen, Todd Sundsted, Tong
He, Vandana Kannan, vdantu, Vishaal Kapoor, wangzhe, xcgoner, Wei Wu,
Wen-Yang Chu, Xingjian Shi, Xinyu Chen, yifeim, Yizhi Liu, YouRancestor,
Yuelin Zhang, Yu-Xiang Wang, Yuan Tang, Yuntao Chen, Zach Kimberg, Zhennan
Qin, Zhi Zhang, zhiyuan-huang, Ziyue Huang, Ziyi Mu, Zhuo Zhang.

… and thanks to all of the Apache MXNet community supporters, spreading
knowledge and helping to grow the community!


Thanks!
Apache MXNet (incubating) Team
___

DISCLAIMER:
Apache MXNet (incubating) is an effort undergoing incubation at The
Apache Software Foundation (ASF), sponsored by the Apache
Incubator PMC. Incubation is required of all newly accepted
projects until a further review indicates that the
infrastructure, communications, and decision-making process have
stabilized in a manner consistent with other successful ASF
projects. While incubation status is not necessarily a reflection
of the completeness or stability of the code, it does indicate
that the project has yet to be fully endorsed by the ASF.


Re: [ANNOUNCE] Apache MXNet (incubating) 1.3.0 Release

2018-09-19 Thread Sheng Zha
Thanks, Sergio. Yes, I'm on it. It was due to the download link not
conforming to the requirement. I will fix and resend.

-sz

On Wed, Sep 19, 2018 at 12:07 PM Sergio Fernández  wrote:

> Zha, you should check you have permissions to post to annou...@apache.org,
> because I don't think your announcement made it through:
> https://lists.apache.org/list.html?annou...@apache.org:lte=1M:mxnet
>
> [image: Screen Shot 2018-09-19 at 12.05.14 PM.png]
>
> On Mon, Sep 17, 2018 at 3:51 PM Sheng Zha  wrote:
>
>> Hello all,
>>
>> The Apache MXNet (incubating) Community announces the availability of
>> Apache MXNet (incubating) 1.3.0!
>>
>> Release blog post:
>> https://blogs.apache.org/mxnet/entry/announcing-apache-mxnet-incubating-1
>> https://medium.com/apache-mxnet/announcing-apache-mxnet-1-3-0-484ea78c22ad
>>
>> Apache MXNet (incubating) is a deep learning framework designed for
>> both efficiency and flexibility. It allows you to mix symbolic and
>> imperative programming to maximize efficiency and productivity.
>>
>> This release improves usability, performance, and interoperability.
>>
>> A full list of the changes in this release can be found in the release
>> notes:
>>
>> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Notes
>>
>> A Link to the Download is here:
>> https://www.apache.org/dyn/closer.cgi/incubator/mxnet/1.3.0
>>
>> If you prefer to build from source and experiment with various
>> compile-time configuration options, use this link to get the
>> instructions:
>> http://mxnet.incubator.apache.org/install/index.html
>>
>> Or You can download and play with MXNet easily using one of the options
>> below:
>>1. The Pip packages can be found here:
>> https://pypi.python.org/pypi/mxnet
>>2. The Docker Images can be found here:
>> https://hub.docker.com/r/mxnet/python/
>>
>> Links in Maven to the published Scala packages:
>>
>> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/
>> https://repository.apache.org/#nexus-search;quick~org.apache.mxnet
>>
>> and to the experimental Clojure packages:
>>
>> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/contrib/clojure/
>>
>> The release tag used for the 1.3.0 release is:
>> https://github.com/apache/incubator-mxnet/tree/1.3.0
>>
>> Some more MXNet Resources:
>>1. Issues: https://github.com/apache/incubator-mxnet/issues
>>2. Wiki: https://cwiki.apache.org/confluence/display/MXNET
>>
>>
>> If you want to learn more about MXNet visit
>> http://mxnet.incubator.apache.org/
>>
>> Finally, you are welcome to join and also invite your friends to the
>> dynamic and growing MXNet community by subscribing to
>> dev@mxnet.incubator.apache.org
>>
>>
>> Acknowledgments:
>> We would like to thank everyone who contributed to the 1.3.0 release:
>>
>> Aaron Markham, Abhinav Sharma, access2rohit, Alex Li, Alexander
>> Alexandrov,
>> Alexander Zai, Amol Lele, Andrew Ayres, Anirudh Acharya, Anirudh
>> Subramanian, Ankit Khedia, Anton Chernov, aplikaplik, Arunkumar V Ramanan,
>> Asmus Hetzel, Aston Zhang, bl0, Ben Kamphaus, brli, Burin Choomnuan,
>> Burness Duan, Caenorst, Cliff Woolley, Carin Meier, cclauss, Carl Tsai,
>> Chance Bair, chinakook, Chudong Tian, ciyong, ctcyang, Da Zheng, Dang
>> Trung
>> Kien, Deokjae Lee, Dick Carter, Didier A., Eric Junyuan Xie, Faldict,
>> Felix
>> Hieber, Francisco Facioni, Frank Liu, Gnanesh, Hagay Lupesko, Haibin Lin,
>> Hang Zhang, Hao Jin, Hao Li, Haozhi Qi, hasanmua, Hu Shiwen, Huilin Qu,
>> Indhu Bharathi, Istvan Fehervari, JackieWu, Jake Lee, James MacGlashan,
>> jeremiedb, Jerry Zhang, Jian Guo, Jin Huang, jimdunn, Jingbei Li, Jun Wu,
>> Kalyanee Chendke, Kellen Sunderland, Kovas Boguta, kpmurali, Kurman
>> Karabukaev, Lai Wei, Leonard Lausen, luobao-intel, Junru Shao, Lianmin
>> Zheng, Lin Yuan, lufenamazon, Marco de Abreu, Marek Kolodziej, Manu Seth,
>> Matthew Brookhart, Milan Desai, Mingkun Huang, miteshyh, Mu Li, Nan Zhu,
>> Naveen Swamy, Nehal J Wani, PatricZhao, Paul Stadig, Pedro Larroy,
>> perdasilva, Philip Hyunsu Cho, Pishen Tsai, Piyush Ghai, Pracheer Gupta,
>> Przemyslaw Tredak, Qiang Kou, Qing Lan, qiuhan, Rahul Huilgol, Rakesh
>> Vasudevan, Ray Zhang, Robert Stone, Roshani Nagmote, Sam Skalicky, Sandeep
>> Krishnamurthy, Sebastian Bodenstein, Sergey Kolychev, Sergey Sokolov,
>> Sheng
>> Zha, Shen Zhu, Sheng-Ying, Shuai Zheng, slitsey, Simon, Sina Afrooze, Soji
&g

[ANNOUNCE] Apache MXNet (incubating) 1.3.0 Release

2018-09-17 Thread Sheng Zha
Hello all,

The Apache MXNet (incubating) Community announces the availability of
Apache MXNet (incubating) 1.3.0!

Release blog post:
https://blogs.apache.org/mxnet/entry/announcing-apache-mxnet-incubating-1
https://medium.com/apache-mxnet/announcing-apache-mxnet-1-3-0-484ea78c22ad

Apache MXNet (incubating) is a deep learning framework designed for
both efficiency and flexibility. It allows you to mix symbolic and
imperative programming to maximize efficiency and productivity.

This release improves usability, performance, and interoperability.

A full list of the changes in this release can be found in the release
notes:
https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Notes

A Link to the Download is here:
https://www.apache.org/dyn/closer.cgi/incubator/mxnet/1.3.0

If you prefer to build from source and experiment with various
compile-time configuration options, use this link to get the
instructions:
http://mxnet.incubator.apache.org/install/index.html

Or You can download and play with MXNet easily using one of the options
below:
   1. The Pip packages can be found here: https://pypi.python.org/pypi/mxnet
   2. The Docker Images can be found here:
https://hub.docker.com/r/mxnet/python/

Links in Maven to the published Scala packages:
https://repository.apache.org/content/repositories/releases/org/apache/mxnet/
https://repository.apache.org/#nexus-search;quick~org.apache.mxnet

and to the experimental Clojure packages:
https://repository.apache.org/content/repositories/releases/org/apache/mxnet/contrib/clojure/

The release tag used for the 1.3.0 release is:
https://github.com/apache/incubator-mxnet/tree/1.3.0

Some more MXNet Resources:
   1. Issues: https://github.com/apache/incubator-mxnet/issues
   2. Wiki: https://cwiki.apache.org/confluence/display/MXNET


If you want to learn more about MXNet visit
http://mxnet.incubator.apache.org/

Finally, you are welcome to join and also invite your friends to the
dynamic and growing MXNet community by subscribing to
dev@mxnet.incubator.apache.org


Acknowledgments:
We would like to thank everyone who contributed to the 1.3.0 release:

Aaron Markham, Abhinav Sharma, access2rohit, Alex Li, Alexander Alexandrov,
Alexander Zai, Amol Lele, Andrew Ayres, Anirudh Acharya, Anirudh
Subramanian, Ankit Khedia, Anton Chernov, aplikaplik, Arunkumar V Ramanan,
Asmus Hetzel, Aston Zhang, bl0, Ben Kamphaus, brli, Burin Choomnuan,
Burness Duan, Caenorst, Cliff Woolley, Carin Meier, cclauss, Carl Tsai,
Chance Bair, chinakook, Chudong Tian, ciyong, ctcyang, Da Zheng, Dang Trung
Kien, Deokjae Lee, Dick Carter, Didier A., Eric Junyuan Xie, Faldict, Felix
Hieber, Francisco Facioni, Frank Liu, Gnanesh, Hagay Lupesko, Haibin Lin,
Hang Zhang, Hao Jin, Hao Li, Haozhi Qi, hasanmua, Hu Shiwen, Huilin Qu,
Indhu Bharathi, Istvan Fehervari, JackieWu, Jake Lee, James MacGlashan,
jeremiedb, Jerry Zhang, Jian Guo, Jin Huang, jimdunn, Jingbei Li, Jun Wu,
Kalyanee Chendke, Kellen Sunderland, Kovas Boguta, kpmurali, Kurman
Karabukaev, Lai Wei, Leonard Lausen, luobao-intel, Junru Shao, Lianmin
Zheng, Lin Yuan, lufenamazon, Marco de Abreu, Marek Kolodziej, Manu Seth,
Matthew Brookhart, Milan Desai, Mingkun Huang, miteshyh, Mu Li, Nan Zhu,
Naveen Swamy, Nehal J Wani, PatricZhao, Paul Stadig, Pedro Larroy,
perdasilva, Philip Hyunsu Cho, Pishen Tsai, Piyush Ghai, Pracheer Gupta,
Przemyslaw Tredak, Qiang Kou, Qing Lan, qiuhan, Rahul Huilgol, Rakesh
Vasudevan, Ray Zhang, Robert Stone, Roshani Nagmote, Sam Skalicky, Sandeep
Krishnamurthy, Sebastian Bodenstein, Sergey Kolychev, Sergey Sokolov, Sheng
Zha, Shen Zhu, Sheng-Ying, Shuai Zheng, slitsey, Simon, Sina Afrooze, Soji
Adeshina, solin319, Soonhwan-Kwon, starimpact, Steffen Rochel, Taliesin
Beynon, Tao Lv, Thom Lane, Thomas Delteil, Tianqi Chen, Todd Sundsted, Tong
He, Vandana Kannan, vdantu, Vishaal Kapoor, wangzhe, xcgoner, Wei Wu,
Wen-Yang Chu, Xingjian Shi, Xinyu Chen, yifeim, Yizhi Liu, YouRancestor,
Yuelin Zhang, Yu-Xiang Wang, Yuan Tang, Yuntao Chen, Zach Kimberg, Zhennan
Qin, Zhi Zhang, zhiyuan-huang, Ziyue Huang, Ziyi Mu, Zhuo Zhang.

… and thanks to all of the Apache MXNet community supporters, spreading
knowledge and helping to grow the community!


Thanks!
Apache MXNet (incubating) Team
___

DISCLAIMER:
Apache MXNet (incubating) is an effort undergoing incubation at The
Apache Software Foundation (ASF), sponsored by the Apache
Incubator PMC. Incubation is required of all newly accepted
projects until a further review indicates that the
infrastructure, communications, and decision-making process have
stabilized in a manner consistent with other successful ASF
projects. While incubation status is not necessarily a reflection
of the completeness or stability of the code, it does indicate
that the project has yet to be fully endorsed by the ASF.


Re: [VOTE] Release MXNet version 1.3.0.RC0

2018-09-04 Thread Sheng Zha
Thanks for sharing your opinions, Thomas. Your recognition and respect of
people's efforts on preparing the release candidate are certainly
appreciated.

Now that the vote is set to fail thanks to the veto, there will be plenty
of opportunities to include those bug fixes, including the one Zhi
mentioned [1], which was already merged into master and which he chose not to
block this release with [2]. I will be happy to work with Roshani to
prepare another release candidate once ready.

-sz

[1]
https://lists.apache.org/thread.html/f02e952bec22c82cb00a6741390a78f55373311c97464997bb455a6c@%3Cdev.mxnet.apache.org%3E
[2]
https://lists.apache.org/thread.html/85d3fcabb3437ba7f1af455cf69aa13eb3afd1ea1d1f6f891e9c339c@%3Cdev.mxnet.apache.org%3E

On Tue, Sep 4, 2018 at 6:02 PM Thomas DELTEIL 
wrote:

> -0
> (non-binding)
>
> If I may add some nuancing plus a personal data point as one of the users
> commenting in the bug report in question:
>
> - Performance vs. Basic functionality => I don't think high performance
> use-cases and basic functionality are two obviously opposed concepts and
> see no contradiction in Hagay's and Sandeep's statements.
> Float16 support is a feature of MXNet that provides more than twice the
> performance of Float32 on supported platforms, hence the high performance
> use-case. The bug is that the basic functionality of reloading a saved
> float16 model is currently broken.
>
> - This bug vs Other bugs => Contrary to the vast majority of the 140 open bugs
> that are mentioned above, I would put to Sandeep's credit that this one bug
> has a PR open that provides a fix for it. This would make it a better
> candidate to get included in this release than a bug that has no fix ready
> for it.
>
> - Personal datapoint: I recently did some experimentation with float16 [1]
> and actually coincidentally just published a video on optimizing
> performance for Gluon. Float16 conversion is one of the most, if not the
> most effective way to get performance out of MXNet [2]. I believe there is
> a lot of value in publicizing more its use and hence making sure at least
> the basic support for normal use-cases is present.
>
> Of course this needs to be balanced with the overhead of preparing a new
> release candidate once the fixed is reviewed and merged, which seems to be
> a lengthy and complex process in its own right, and the delay with
> providing the other features present in 1.3 for users that are not running
> off the nightly builds.
>
> All the best,
>
> Thomas
>
> [1] https://github.com/ThomasDelteil/PerformanceTricksMXNetGluon
> [2]
>
> https://www.youtube.com/watch?v=Cqo7FPftNyo=0s=PLkEvNnRk8uVk6U515Pj-jHQUxFC4eDi3m
>
> Le mar. 4 sept. 2018 à 17:11, Sheng Zha  a écrit :
>
> > Sandeep,
> >
> > Thanks for explaining your veto. We have open bugs that impacted a lot
> more
> > than just 3 customers, just by referring to the number of commenters on
> the
> > issue [1].
> >
> > You said that this is for "high performance use cases", which contradicts
> > Hagay's assessment that this is "basic functionality broken". Given
> that
> > this is for advanced use cases of using half-precision training, why is
> it
> > so much more important than any other open bug reports, that for this
> > specific bug fix, we have to delay the access of regular users to the new
> > MXNet 1.3 release by at least another week?
> >
> > Honestly, I'm concerned that your vote is biased by Amazon involvement,
> > given that you quoted Amazon Rekognition.
> >
> > -sz
> >
> > [1]
> >
> >
> https://github.com/apache/incubator-mxnet/issues?q=is%3Aissue+is%3Aopen+label%3ABug+sort%3Acomments-desc
> >
> > On Tue, Sep 4, 2018 at 4:51 PM sandeep krishnamurthy <
> > sandeep.krishn...@gmail.com> wrote:
> >
> > > My initial vote of “-0” was due to lack of info from a user who had
> said,
> > > he overcame this issue for FP16 model.
> > >
> > >
> > > However, the suggested workaround [1] for the issue is not straightforward
> > and
> > > generally usable for all users. Also, the issue is not simple and isolated
> to
> > > be listed in the Release Notes as known issue with a workaround.
> > >
> > >
> > > Changing my vote to: "-1 (binding)" owing to the user impact [3]
> > >
> > >
> > >
> > > @Sheng:
> > >
> > > 1. Agreed, the bug has existed for a long time. However, FP16 and such
> > optimizations
> > > were added later on. Followed by users [2] using this feature for high
> > > performance use cases. It is not ok to measure severity of the bu

Re: [VOTE] Release MXNet version 1.3.0.RC0

2018-09-04 Thread Sheng Zha
Sandeep,

Thanks for explaining your veto. We have open bugs that impacted a lot more
than just 3 customers, just by referring to the number of commenters on the
issue [1].

You said that this is for "high performance use cases", which contradicts
Hagay's assessment that this is "basic functionality broken". Given that
this is for advanced use cases of using half-precision training, why is it
so much more important than any other open bug reports, that for this
specific bug fix, we have to delay the access of regular users to the new
MXNet 1.3 release by at least another week?

Honestly, I'm concerned that your vote is biased by Amazon involvement,
given that you quoted Amazon Rekognition.

-sz

[1]
https://github.com/apache/incubator-mxnet/issues?q=is%3Aissue+is%3Aopen+label%3ABug+sort%3Acomments-desc

On Tue, Sep 4, 2018 at 4:51 PM sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> My initial vote of “-0” was due to lack of info from a user who had said,
> he overcame this issue for FP16 model.
>
>
> However, the suggested workaround [1] for the issue is not straightforward and
> generally usable for all users. Also, the issue is not simple and isolated enough to
> be listed in the Release Notes as a known issue with a workaround.
>
>
> Changing my vote to: "-1 (binding)" owing to the user impact [3]
>
>
>
> @Sheng:
>
> 1. Agreed, the bug has existed for a long time. However, FP16 and such optimizations
> were added later on, followed by users [2] using this feature for high
> performance use cases. It is not ok to measure the severity of the bug based on
> its past existence; rather, we should look at who is impacted now and whether it is a small
> subset with a simple workaround or a large user-impacting issue.
>
> 2. Agreed, the bug was reported 7/21. However, I became aware of this issue on
> 08/29 and submitted the fix on 08/30. Also, I did bring this to the notice
> of the community, you, and the 1.3 release manager (Roshani) on the RC0 proposal
> thread. Also, I would focus on the issue and user impact rather than who
> identified and who is fixing the issue.
>
>
> Based on my discussion with 2 users, I think it is an important feature for
> them to see in Apache MXNet v1.3.0.
>
>
>
> Best,
>
> Sandeep
>
>
> [1] Workaround used by the user.
>
> net_fp16 = mx.gluon.SymbolBlock.imports('resnet34_fp16-symbol.json', ['data'])
> params_fp16 = mx.nd.load('resnet34_fp16-.params')
>
> for k, v in params_fp16.items():
>     new_key = k.split(':')[1]
>     net_fp16.collect_params()[new_key].cast(v.dtype)
>
> net_fp16.collect_params().load('resnet34_fp16-.params', ctx)
>
>
> [2] Amazon Rekognition
>
>
> [3] User story: Train a model -> Cast it to FP16 -> Save the model -> Load
> back the model does not work. They have to cast every parameter with a
> workaround mentioned above [1].
>
> On Tue, Sep 4, 2018 at 4:14 PM Hagay Lupesko  wrote:
>
> > Hi Sheng,
> >
> > Addressing your questions:
> >
> > - "why this specific bug is more important than all the other known bugs,
> > that this becomes a release blocker"
> > I do not consider it to be more or less important than other fixes. It
> can
> > be fixed and included in the release alongside the rest of the release
> > content, right?
> > From the description of the issue it seems important since it is blocking
> > users from loading models that were previously trained and saved. There
> is
> > nothing stopping the community from including this fix into 1.3.0,
> > alongside the rest of the features and fixes.
> >
> > - "The bug exists since SymbolBlock was introduced a year ago and has
> > survived at least three releases, so this is not a regression."
> > I do not think I said it is a regression. However, the fact a bug existed
> > before, does not mean it is OK to release it rather than fix it.
> >
> > - "Timeline-wise, this bug was reported on 7/21, but was not reported as
> > release-blocker in the release discussion thread until 8/31 [1]. Neither
> > its reporting as release-blocker nor its fix made it for the 8/3 code
> > freeze."
> > You are right, would have been better to have this identified and fixed
> > earlier and included before code freeze.
> >
> > - "The PR is still not ready yet as it doesn't have approval."
> > I think it is waiting for your review.
> >
> > - "it would be great if you could provide some additional reasoning
> besides
> > "X mentions the issue" or "fix was done by X""
> > I have. Repeating what I wrote in my previous email for clarity: Basic
> &

Re: [VOTE] Release MXNet version 1.3.0.RC0

2018-09-04 Thread Sheng Zha
Hi Hagay,

You asked, "It can be fixed and included in the release alongside the rest
of the release content, right?"

Yes, it can, after it has appropriate approval and is merged to master, and at
the cost of restarting the vote.

However, personally, I do not think there's enough justification for this
patch to stop the release, given that:
1. this is not a regression, so 1.3 is not in a worse shape than any prior
releases, in the area that this patch addresses.
2. the attempt to put in this patch does not respect the code freeze
time that the community agreed on.
3. we are not stopping the release for any of the other 139 open bug reports
[1], and you did not provide an argument that fixing this bug is more
important than fixing any of those 139 bugs.

Finally, the flow of first claiming that the fix "is ready to be cherry picked
into the release branch" when it's not, and then moving on to "I think it
is waiting for your review", makes me uncomfortable. If you'd
like to imply that I'm blocking the merge of that patch, I'm not. As you
may not realize, I have other work to do as many committers do. Given your
status as an engineering lead at Amazon, you can probably get immediate
help if you ask the committers on your team.

[1]
https://github.com/apache/incubator-mxnet/issues?page=2=is%3Aissue+is%3Aopen+label%3ABug

On Tue, Sep 4, 2018 at 4:14 PM Hagay Lupesko  wrote:

> Hi Sheng,
>
> Addressing your questions:
>
> - "why this specific bug is more important than all the other known bugs,
> that this becomes a release blocker"
> I do not consider it to be more or less important than other fixes. It can
> be fixed and included in the release alongside the rest of the release
> content, right?
> From the description of the issue it seems important since it is blocking
> users from loading models that were previously trained and saved. There is
> nothing stopping the community from including this fix into 1.3.0,
> alongside the rest of the features and fixes.
>
> - "The bug exists since SymbolBlock was introduced a year ago and has
> survived at least three releases, so this is not a regression."
> I do not think I said it is a regression. However, the fact a bug existed
> before, does not mean it is OK to release it rather than fix it.
>
> - "Timeline-wise, this bug was reported on 7/21, but was not reported as
> release-blocker in the release discussion thread until 8/31 [1]. Neither
> its reporting as release-blocker nor its fix made it for the 8/3 code
> freeze."
> You are right, would have been better to have this identified and fixed
> earlier and included before code freeze.
>
> - "The PR is still not ready yet as it doesn't have approval."
> I think it is waiting for your review.
>
> - "it would be great if you could provide some additional reasoning besides
> "X mentions the issue" or "fix was done by X""
> I have. Repeating what I wrote in my previous email for clarity: Basic
> functionality broken: loading a model (albeit one that was saved as
> non FP32)
>
> So, yes - this issue seems to have been out there for a while, somehow went
> under the radar... but I think the key question is whether this blocks a
> basic functionality in MXNet. I believe so, hence my -1 vote.
>
> Hagay
>
> On Tue, Sep 4, 2018 at 1:19 PM Sheng Zha  wrote:
>
> > Hi Hagay and Sandeep,
> >
> > Could you help us understand why this specific bug is more important than
> > all the other known bugs, that this becomes a release blocker?
> >
> > Some facts to consider:
> > - The bug exists since SymbolBlock was introduced a year ago and has
> > survived at least three releases, so this is not a regression.
> > - Timeline-wise, this bug was reported on 7/21, but was not reported as
> > release-blocker in the release discussion thread until 8/31 [1]. Neither
> > its reporting as release-blocker nor its fix made it for the 8/3 code
> > freeze.
> > - The PR is still not ready yet as it doesn't have approval.
> >
> > Hagay, it would be great if you could provide some additional reasoning
> > besides "X mentions the issue" or "fix was done by X". Thanks.
> >
> > -sz
> >
> > [1]
> >
> >
> https://lists.apache.org/thread.html/d1ed611f98c20d5d85c294b0c07c8bdebca13a209cf66a3872c9123e@%3Cdev.mxnet.apache.org%3E
> >
> > On Tue, Sep 4, 2018 at 12:39 PM Hagay Lupesko  wrote:
> >
> > > Sandeep mentions the issue of an error when user tries to load model
> > params
> > > trained/saved as FP16.
> > > https://github.com/apache/incubator-mxnet/issues/11849
> > > The fix was done by Sandeep:
> > > 

Re: [VOTE] Release MXNet version 1.3.0.RC0

2018-09-04 Thread Sheng Zha
Hi Hagay and Sandeep,

Could you help us understand why this specific bug is more important than
all the other known bugs, that this becomes a release blocker?

Some facts to consider:
- The bug exists since SymbolBlock was introduced a year ago and has
survived at least three releases, so this is not a regression.
- Timeline-wise, this bug was reported on 7/21, but was not reported as
release-blocker in the release discussion thread until 8/31 [1]. Neither
its reporting as release-blocker nor its fix made it for the 8/3 code
freeze.
- The PR is still not ready yet as it doesn't have approval.

Hagay, it would be great if you could provide some additional reasoning
besides "X mentions the issue" or "fix was done by X". Thanks.

-sz

[1]
https://lists.apache.org/thread.html/d1ed611f98c20d5d85c294b0c07c8bdebca13a209cf66a3872c9123e@%3Cdev.mxnet.apache.org%3E

On Tue, Sep 4, 2018 at 12:39 PM Hagay Lupesko  wrote:

> Sandeep mentions the issue of an error when user tries to load model params
> trained/saved as FP16.
> https://github.com/apache/incubator-mxnet/issues/11849
> The fix was done by Sandeep:
> https://github.com/apache/incubator-mxnet/pull/12412 and is ready to be
> cherry picked into the release branch.
>
> This seems like a release blocker to me:
> - Basic functionality broken: loading a model (albeit one that was
> saved as non FP32)
> - Reported by 3 users (wgchang@, nicklhy@ and ThomasDelteil@)
>
> -1 (non binding)
>
> Hagay
>
>
>
> On Tue, Sep 4, 2018 at 12:01 PM sandeep krishnamurthy <
> sandeep.krishn...@gmail.com> wrote:
>
> > "- 0"
> >
> > I believe the bug #11849
> > , unable to
> import
> > non-fp32 models into Gluon, fixed in this PR #12412
> >  is important for
> > the
> > users. I would rather pick this fix in this release than plan a minor
> > release later.
> >
> > Best,
> > Sandeep
> >
> >
> >
> > On Mon, Sep 3, 2018 at 2:34 PM Philip Cho 
> > wrote:
> >
> > > Actually, the command "git clone --recursive
> > > https://github.com/apache/incubator-mxnet -b 1.3.0.rc0" works fine
> now,
> > > never mind.
> > >
> > > On Mon, Sep 3, 2018 at 1:45 PM Philip Cho 
> > > wrote:
> > >
> > > > Unfortunately, MXNet was depending on a branch of TVM that is now
> > > deleted.
> > > > We will have to merge #12448
> > > >  before the
> > > release.
> > > >
> > > > Background: See dmlc/tvm#1394 <
> https://github.com/dmlc/tvm/issues/1394
> > >.
> > > >
> > > > Philip.
> > > >
> > > > On Mon, Sep 3, 2018 at 7:26 AM Carin Meier 
> > wrote:
> > > >
> > > >> Checked out the tag, built and tested the Clojure package. +1
> > > >>
> > > >> On Fri, Aug 31, 2018 at 10:59 PM Roshani Nagmote <
> > > >> roshaninagmo...@gmail.com>
> > > >> wrote:
> > > >>
> > > >> > Hi all,
> > > >> >
> > > >> > I would like to propose a vote to release Apache MXNet
> (incubating)
> > > >> version
> > > >> > 1.3.0.RC0. Voting will start now (Friday, Aug 31st) and end at
> 7:00
> > PM
> > > >> > PDT, Wednesday, Sept 5th.
> > > >> >
> > > >> > Link to release notes:
> > > >> > https://github.com/apache/incubator-mxnet/releases
> > > >> >
> > > >> > Link to release candidate 1.3.0.rc0:
> > > >> > *https://github.com/apache/incubator-mxnet/releases/tag/1.3.0.rc0*
> > > >> >
> > > >> > View this page, click on "Build from Source", and use the source
> > code
> > > >> > obtained from 1.3.0.rc0 tag:
> > > >> > https://mxnet.incubator.apache.org/install/index.html
> > > >> >
> > > >> > Please remember to TEST first before voting accordingly:
> > > >> >
> > > >> > +1 = approve
> > > >> > +0 = no opinion
> > > >> > -1 = disapprove (provide reason)
> > > >> >
> > > >> > Thanks,
> > > >> > Roshani
> > > >> >
> > > >>
> > > >
> > >
> >
> >
> > --
> > Sandeep Krishnamurthy
> >
>


Re: [VOTE] Release MXNet version 1.3.0.RC0

2018-09-02 Thread Sheng Zha
Hi Steffen and Zhi,

That's because those are not the artifacts being voted on. I just uploaded the 
actual release artifact to [1]. Unfortunately, even the lengthy release process 
doc [2] didn't capture this step...

Steffen,

In case you don't already know, regarding the version string, since we cannot 
change the code after the vote passes, the version never says it's a release 
candidate, only the file name does. None of the previous releases follow the 
convention you suggested. Please adjust your expectation and vote again. Feel 
free to download previous releases and verify:

% tar -zxf apache-mxnet-src-1.2.1.rc1-incubating.tar.gz -O 
apache-mxnet-src-1.2.1.rc1-incubating/python/mxnet/libinfo.py | grep 
'__version__'
__version__ = "1.2.1"

Zhi,

We are not accepting new patches after the announced cutoff time. If you think 
this patch is optional and you see no other issue with this release, consider 
changing your vote. If you think the patch is critical, feel free to sustain 
your -1 vote until the end of this voting cycle.

[1] 
https://github.com/apache/incubator-mxnet/releases/download/1.3.0.rc0/apache-mxnet-src-1.3.0.rc0-incubating.tar.gz
[2] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=73630468

-sz

On 2018/09/01 22:08:48, "Joshua Z. Zhang"  wrote: 
> -1. Please include all 3rd party dependencies, GitHub won’t automatically do 
> that. 
> 
> BTW, per a user request in the forum, I found this 
> PR (https://github.com/apache/incubator-mxnet/pull/12118 
> ) is not included in 
> 1.3 rc0; I recommend cherry-picking it into the release to avoid potential problems. 
> 
> Best,
> Zhi
> > On Sep 1, 2018, at 2:27 PM, Steffen Rochel  wrote:
> > 
> > -1
> > 
> > https://github.com/apache/incubator-mxnet/archive/1.3.0.rc0.zip and
> > https://github.com/apache/incubator-mxnet/archive/1.3.0.rc0.tar.gz do not
> > contain 3rdparty packages, resulting in make failure:
> > tar zxf incubator-mxnet-1.3.0.rc0.tar.gz
> > cd incubator-mxnet-1.3.0.rc0/
> > make USE_OPENCV=1 USE_BLAS=openblas
> > Makefile:74:
> > /home/steffen/Downloads/incubator-mxnet-1.3.0.rc0/3rdparty/mshadow/make/
> > mshadow.mk: No such file or directory
> > Makefile:75:
> > /home/steffen/Downloads/incubator-mxnet-1.3.0.rc0/3rdparty/dmlc-core/make/
> > dmlc.mk: No such file or directory
> > Makefile:176: "USE_LAPACK disabled because libraries were not found"
> > Makefile:284: WARNING: Significant performance increases can be achieved by
> > installing and enabling gperftools or jemalloc development packages
> > Makefile:355:
> > /home/steffen/Downloads/incubator-mxnet-1.3.0.rc0/3rdparty/ps-lite/make/
> > ps.mk: No such file or directory
> > make: *** No rule to make target
> > '/home/steffen/Downloads/incubator-mxnet-1.3.0.rc0/3rdparty/ps-lite/make/
> > ps.mk'.  Stop.
> > 
> > ~/Downloads/incubator-mxnet-1.3.0.rc0/3rdparty$ ls -al *
> > cub:
> > total 8
> > drwxr-xr-x  2 steffen steffen 4096 Aug 29 10:07 .
> > drwxr-xr-x 12 steffen steffen 4096 Aug 29 10:07 ..
> > 
> > dlpack:
> > total 8
> > drwxr-xr-x  2 steffen steffen 4096 Aug 29 10:07 .
> > drwxr-xr-x 12 steffen steffen 4096 Aug 29 10:07 ..
> > 
> > dmlc-core:
> > total 8
> > drwxr-xr-x  2 steffen steffen 4096 Aug 29 10:07 .
> > drwxr-xr-x 12 steffen steffen 4096 Aug 29 10:07 ..
> > 
> > Environment:
> > uname -a
> > Linux steffen 4.15.0-33-generic #36-Ubuntu SMP Wed Aug 15 16:00:05 UTC 2018
> > x86_64 x86_64 x86_64 GNU/Linux
> > 
> > Build from git succeeded:
> > git clone --recursive https://github.com/apache/incubator-mxnet --branch
> > 1.3.0.rc0
> > cd incubator-mxnet/
> > git checkout 1.3.0.rc0
> > make USE_OPENCV=1 USE_BLAS=openblas
> > cd python/
> > sudo pip install -e .
> > 
> > >>> import mxnet as mx
> > >>> print(mx.__version__)
> > 1.3.0
> > 
> > I was expecting version to be 1.3.0.rc0
> > 
> > Steffen
> > 
> > 
> > 
> > On Sat, Sep 1, 2018 at 3:22 AM Pigeon Lucky  wrote:
> > 
> >> +1
> >> 
> >> On Sat, 1 Sep 2018, 10:59 Roshani Nagmote, 
> >> wrote:
> >> 
> >>> Hi all,
> >>> 
> >>> I would like to propose a vote to release Apache MXNet (incubating)
> >> version
> >>> 1.3.0.RC0. Voting will start now (Friday, Aug 31st) and end at 7:00 PM
> >>> PDT, Wednesday, Sept 5th.
> >>> 
> >>> Link to release notes:
> >>> https://github.com/apache/incubator-mxnet/releases
> >>> 
> >>> Link to release candidate 1.3.0.rc0:
> >>> *https://github.com/apache/incubator-mxnet/releases/tag/1.3.0.rc0*
> >>> 
> >>> View this page, click on "Build from Source", and use the source code
> >>> obtained from 1.3.0.rc0 tag:
> >>> https://mxnet.incubator.apache.org/install/index.html
> >>> 
> >>> Please remember to TEST first before voting accordingly:
> >>> 
> >>> +1 = approve
> >>> +0 = no opinion
> >>> -1 = disapprove (provide reason)
> >>> 
> >>> Thanks,
> >>> Roshani
> >>> 
> >> 
> 
> 


Re: Nightly Builds Not Working for Cu90MKL?

2018-08-31 Thread Sheng Zha
Hi Alfredo,

Looks like the recent increase in binary size is causing timeouts when 
publishing. I'm looking into it. In the meantime, please build from the source 
until it's resolved. Sorry for the inconvenience.
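
In case it saves you some digging, here is a rough sketch of a source build for a
CUDA 9.0 + MKL-DNN setup. Treat the flags and paths below as assumptions to adjust
for your machine rather than the official instructions:

# clone the sources together with the 3rdparty submodules
git clone --recursive https://github.com/apache/incubator-mxnet
cd incubator-mxnet
# build the backend; drop the CUDA/MKL-DNN flags if they don't apply to your setup
make -j$(nproc) USE_OPENCV=1 USE_BLAS=openblas \
    USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_MKLDNN=1
# install the Python binding in-place
cd python && pip install -e .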

-sz

On 2018/08/31 21:29:05, Alfredo Luque  wrote: 
> See here:
> https://pypi.org/project/mxnet-cu90mkl/#history
> 
> No builds show up since 8/22. From what I can tell, other variants (e.g.,
> mxnet-mkl) are up to date.
> 
> On August 31, 2018 at 2:24:30 PM, Anton Chernov (mecher...@gmail.com) wrote:
> 
> Hi Alfredo!
> 
> Could you provide more info on this? Where do you get the information?
> 
> Best
> Anton
> 
> пт, 31 авг. 2018 г. в 22:49, Alfredo Luque  >:
> 
> > Just curious why the latest build is 2018-08-22 while the other variants
> > are up to date.
> >
> > Thanks,
> >
> > —
> > Alfredo Luque
> > Software Engineer
> > Machine Learning Infrastructure
> > Airbnb
> > San Francisco, CA
> >
> 
> —
> Alfredo Luque
> Software Engineer
> Machine Learning Infrastructure
> Airbnb
> San Francisco, CA
> 


Re: Propose to discontinue supporting Apache MXNet on Windows 7

2018-08-30 Thread Sheng Zha
Hi Kellen,

Thanks for the explanation. Unfortunately, I don't have the usage data, so I 
refrained from voting. If any of the voters have such data I'd love to see it 
too.

-sz

On 2018/08/30 14:58:09, kellen sunderland  wrote: 
> I haven't spoken to anyone about the decision (as I'm currently on an
> island in the med) but to me the quick +1s are likely a result of this
> being a fairly straightforward decision.  The factors that went into my
> thinking were (1) prioritizing growing platforms rather than shrinking
> platforms (i.e. thinking long term rather than short term) and (2) earning
> our customers' trust.  Claiming support for a platform when we can't
> realistically deliver it would lose us trust.  I'd prefer to over-deliver
> and under-promise when it comes to Windows 7 for this reason.
> 
> Now on the flip side one thing I would see as valuable is to try and get
> windows builds working with clang.  This could be beneficial in the sense
> that it would be easy to maintain for mxnet devs and allow us to use modern
> cpp on older windows machines without using vs 2013(which I consider a
> non-starter with our codebase).
> 
> You have piqued my curiosity though, Sheng.  How many Win7 users does MXNet
> have relative to macos/Linux?
> 
> On Thu, Aug 30, 2018, 8:51 AM Sheng Zha  wrote:
> 
> > Hi Yuan,
> >
> > No problem. This is an issue that's worth having a clear definition, so
> > there's nothing wrong about your proposal, and thanks for bringing this up.
> >
> > I'm more concerned about the seemingly unanimous votes on dropping support
> > on a platform without seeing the supporting evidence that it's the right
> > thing. It is as if everyone who participated in the vote are already on the
> > same page, and somehow I'm the only one that's not. But the only argument I
> > hear so far is that it's technically not straightforward to continue the
> > support, which, coming from Amazon folks, certainly doesn't sound
> > customer-obsessed.
> >
> > -sz
> >
> > On Wed, Aug 29, 2018 at 11:37 PM Lin Yuan  wrote:
> >
> > > Hi Sheng,
> > >
> > > Thanks for raising this concern. The problem now is that we cannot even
> > > build MXNet on Windows 7 because the build process requires MS VS 2015 w/
> > > update 3 which is incompatible on Windows 7. This leaves many Windows 7
> > > related open issues on github without any timely response. In my opinion,
> > > having no response to users' request is probably even worse than letting
> > > them know the limitation of OS support.
> > >
> > > To minimize the impact to current Windows 7 users, we can provide PyPi
> > > package for Windows 7 in this release but defer the bug fix and feature
> > > enhancement to later Windows OS version. Based on users' feedbacks, we
> > can
> > > then officially discontinue the Windows 7 support in the next MXNet
> > > release.
> > >
> > > I will appreciate your comments.
> > >
> > > Lin
> > >
> > >
> > >
> > > On Wed, Aug 29, 2018 at 1:37 PM Sheng Zha  wrote:
> > >
> > > > Are any of the votes based on any measure of user impact, if we indeed
> > > > decide not to fix the current problems?
> > > >
> > > > -sz
> > > >
> > > > On Wed, Aug 29, 2018 at 1:29 PM Hagay Lupesko 
> > wrote:
> > > >
> > > > > +1 (non-binding)
> > > > > Thanks for raising this Lin!
> > > > > Are you suggesting to do it as part of MXNet 1.3?
> > > > >
> > > > > On Wed, Aug 29, 2018 at 9:14 AM Srivastava, Rohit Kumar <
> > > > > srivastava@buckeyemail.osu.edu> wrote:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > On 8/29/18, 8:39 AM, "sandeep krishnamurthy" <
> > > > > sandeep.krishn...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > +1 Thanks for bringing this up.
> > > > > >
> > > > > > On Wed, Aug 29, 2018 at 6:38 AM Marco de Abreu
> > > > > >  wrote:
> > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > > On Wed, Aug 29, 2018 at 1:08 PM kellen sunderland <
> > > > > > > kellen.sunderl...@gmail.com> wrote:
> > > > > > >
> > > > > > > > +1 (non-binding)
> > > > > &g

CI Issue in Julia PR #10149

2018-08-30 Thread Sheng Zha
Hi,

This is regarding https://github.com/apache/incubator-mxnet/pull/10149. The
author ran into an mxnet CI issue, and has tried to reach out to the people
who donated the CI system on the PR but didn't get a response. As a result,
the progress on porting the Julia binding has been halted for three weeks now. He
reached out to me on Slack to get help. Since the CI was donated by Amazon,
I pinged the team at Amazon that is responsible for the CI operations
through other channels 10 days ago but so far the author still hasn't
received a response as of 3am today.

I'm reaching out to the wider community in case the CI issues are being
more closely monitored here and in case others know of better ways to help
Iblis.

-sz


Re: Propose to discontinue supporting Apache MXNet on Windows 7

2018-08-30 Thread Sheng Zha
Hi Yuan,

No problem. This is an issue that's worth having a clear definition, so
there's nothing wrong about your proposal, and thanks for bringing this up.

I'm more concerned about the seemingly unanimous votes on dropping support
on a platform without seeing the supporting evidence that it's the right
thing. It is as if everyone who participated in the vote are already on the
same page, and somehow I'm the only one that's not. But the only argument I
hear so far is that it's technically not straightforward to continue the
support, which, coming from Amazon folks, certainly doesn't sound
customer-obsessed.

-sz

On Wed, Aug 29, 2018 at 11:37 PM Lin Yuan  wrote:

> Hi Sheng,
>
> Thanks for raising this concern. The problem now is that we cannot even
> build MXNet on Windows 7 because the build process requires MS VS 2015 w/
> update 3 which is incompatible on Windows 7. This leaves many Windows 7
> related open issues on github without any timely response. In my opinion,
> having no response to users' request is probably even worse than letting
> them know the limitation of OS support.
>
> To minimize the impact to current Windows 7 users, we can provide PyPi
> package for Windows 7 in this release but defer the bug fix and feature
> enhancement to later Windows OS version. Based on users' feedbacks, we can
> then officially discontinue the Windows 7 support in the next MXNet
> release.
>
> I will appreciate your comments.
>
> Lin
>
>
>
> On Wed, Aug 29, 2018 at 1:37 PM Sheng Zha  wrote:
>
> > Are any of the votes based on any measure of user impact, if we indeed
> > decide not to fix the current problems?
> >
> > -sz
> >
> > On Wed, Aug 29, 2018 at 1:29 PM Hagay Lupesko  wrote:
> >
> > > +1 (non-binding)
> > > Thanks for raising this Lin!
> > > Are you suggesting to do it as part of MXNet 1.3?
> > >
> > > On Wed, Aug 29, 2018 at 9:14 AM Srivastava, Rohit Kumar <
> > > srivastava@buckeyemail.osu.edu> wrote:
> > >
> > > > +1
> > > >
> > > > On 8/29/18, 8:39 AM, "sandeep krishnamurthy" <
> > > sandeep.krishn...@gmail.com>
> > > > wrote:
> > > >
> > > > +1 Thanks for bringing this up.
> > > >
> > > > On Wed, Aug 29, 2018 at 6:38 AM Marco de Abreu
> > > >  wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Wed, Aug 29, 2018 at 1:08 PM kellen sunderland <
> > > > > kellen.sunderl...@gmail.com> wrote:
> > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > On Wed, Aug 29, 2018, 1:18 AM Anirudh Acharya <
> > > > anirudhk...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > +1 for discontinuing.
> > > > > > >
> > > > > > > On Tue, Aug 28, 2018 at 4:11 PM Naveen Swamy <
> > > mnnav...@gmail.com
> > > > >
> > > > > wrote:
> > > > > > >
> > > > > > > > +1 to stop supporting Win7
> > > > > > > >
> > > > > > > > On Tue, Aug 28, 2018 at 3:54 PM Lin Yuan <
> > > apefor...@gmail.com>
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Dear Community,
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Currently, our MXNet installation guide for Windows
> does
> > > not
> > > > work
> > > > > for
> > > > > > > > > Windows 7. e.g. Microsoft Visual Studio 2015 is not
> > > > supported on
> > > > > > > Windows
> > > > > > > > 7
> > > > > > > > > <
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://visualstudio.microsoft.com/vs/support/vs2015/received-error-specified-program-requires-newer-version-windows/
> > > > > > > > > >.
> > > > > > > > > In addition, MSFT ended “Mainstream” support for
> Windows
> > 7
> > > > in 2015
> >

Re: Propose to discontinue supporting Apache MXNet on Windows 7

2018-08-29 Thread Sheng Zha
Are any of the votes based on any measure of user impact, if we indeed
decide not to fix the current problems?

-sz

On Wed, Aug 29, 2018 at 1:29 PM Hagay Lupesko  wrote:

> +1 (non-binding)
> Thanks for raising this Lin!
> Are you suggesting to do it as part of MXNet 1.3?
>
> On Wed, Aug 29, 2018 at 9:14 AM Srivastava, Rohit Kumar <
> srivastava@buckeyemail.osu.edu> wrote:
>
> > +1
> >
> > On 8/29/18, 8:39 AM, "sandeep krishnamurthy" <
> sandeep.krishn...@gmail.com>
> > wrote:
> >
> > +1 Thanks for bringing this up.
> >
> > On Wed, Aug 29, 2018 at 6:38 AM Marco de Abreu
> >  wrote:
> >
> > > +1
> > >
> > > On Wed, Aug 29, 2018 at 1:08 PM kellen sunderland <
> > > kellen.sunderl...@gmail.com> wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > On Wed, Aug 29, 2018, 1:18 AM Anirudh Acharya <
> > anirudhk...@gmail.com>
> > > > wrote:
> > > >
> > > > > +1 for discontinuing.
> > > > >
> > > > > On Tue, Aug 28, 2018 at 4:11 PM Naveen Swamy <
> mnnav...@gmail.com
> > >
> > > wrote:
> > > > >
> > > > > > +1 to stop supporting Win7
> > > > > >
> > > > > > On Tue, Aug 28, 2018 at 3:54 PM Lin Yuan <
> apefor...@gmail.com>
> > > wrote:
> > > > > >
> > > > > > > Dear Community,
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Currently, our MXNet installation guide for Windows does
> not
> > work
> > > for
> > > > > > > Windows 7. e.g. Microsoft Visual Studio 2015 is not
> > supported on
> > > > > Windows
> > > > > > 7
> > > > > > > <
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://visualstudio.microsoft.com/vs/support/vs2015/received-error-specified-program-requires-newer-version-windows/
> > > > > > > >.
> > > > > > > In addition, MSFT ended “Mainstream” support for Windows 7
> > in 2015
> > > (
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://support.microsoft.com/en-us/help/13853/windows-lifecycle-fact-sheet
> > > > > > > ).
> > > > > > > Therefore, it is not possible for developers to build MXNet
> > and
> > > > verify
> > > > > > the
> > > > > > > fix on Windows 7 platform. Given that there have been
> several
> > > issues
> > > > > > about
> > > > > > > MXNet error on Windows 7 (issue#9271
> > > > > > > ,
> > issue
> > > #8921
> > > > > > > ,
> > issue
> > > > #11163
> > > > > > > ),
> > it will
> > > > > even
> > > > > > > add
> > > > > > > more burden on developers in the future if we were to
> > continue
> > > > > supporting
> > > > > > > Windows 7.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > I therefore would like to propose that we discontinue the
> > support
> > > of
> > > > > > MXNet
> > > > > > > on Windows 7 in the next release.
> > > > > > >
> > > > > > >
> > > > > > > Specifically, this means the following required actions:
> > > > > > >
> > > > > > > 1) state the discontinuation of Windows 7 support in the
> > release
> > > note
> > > > > > >
> > > > > > > 2) update the MXNet webpage if Windows version is
> mentioned.
> > > > > > >
> > > > > > > 3) update the open Github issues related to Windows 7
> > > > > > >
> > > > > > >
> > > > > > > Please share your thoughts about this proposal and/or
> > suggest if
> > > > there
> > > > > is
> > > > > > > any other missing action item from the above.
> > > > > > >
> > > > > > >
> > > > > > > Best Regards,
> > > > > > >
> > > > > > >
> > > > > > > Lin
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> > Sandeep Krishnamurthy
> >
> >
> >
>


Re: There is a bug in shape inference of the where operator

2018-08-23 Thread Sheng Zha
Correct me if I'm wrong. The bug went into the release branch so we need to
cherry-pick the fix once the patch is merged to master.
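
For the record, a minimal sketch of that flow once the patch lands on master (the
release branch name and the commit hash below are placeholders, not the actual ones):

git checkout v1.3.x
git cherry-pick <hash-of-the-where-shape-inference-fix>
git push origin v1.3.x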

-sz

On Thu, Aug 23, 2018 at 4:14 PM Lin Yuan  wrote:

> Hi Da,
>
> I am currently running the unit test and will check in the fix once it's
> complete.
>
> Thanks,
>
> Lin
>
> On Thu, Aug 23, 2018 at 3:59 PM Zheng, Da 
> wrote:
>
> > Hello all,
> >
> > There is a little bug in shape inference of the where operator.
> Currently,
> > the where operator doesn’t work if the first input is a 1D array. Yuan
> Lin
> > will provide a patch to fix this bug.
> >
> > Best,
> > Da
> >
>


Re: Growing number of open PRs and Labelling PRs

2018-08-08 Thread Sheng Zha
Hi Sandeep,

Sorry if I asked an obvious question, but is it required to introduce a new
solution that requires committer access? We have an existing solution to
communicate the PR status, which is through the PR template checklist. The
PR checklist provides clickable options to reflect the PR status. It
doesn't require committer involvement and contributors can directly edit
the description themselves. It shows a progress bar that can be seen
directly in the pull requests page.
https://github.com/apache/incubator-mxnet/pulls

Best regards,
-sz


On Wed, Aug 8, 2018 at 2:44 PM, sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> << Sorry sent too early>>
> Hello Community,
>
> Recently, we are observing a growing number of open PRs {pending for review,
> pending for updates, ready to merge but waiting and more}.
>
> Few of us committers (Naveen, Haibin, Anirudh and Me) and contributors
> (Steffen and Hagay) met to discuss on how to improve the process in
> reviewing the PR and allow more people join the review process.
>
> To shed some light on numbers:
>
> *(As of 6-Aug-2018)*
>
>- Total open PRs - 113 - Link
>
>- Total open PRs with No Reviews - 94 - Link
> 3Apr+is%3Aopen+review%3Anone>
>(*Note:* Out of these there are comments for 72 PRs. This count is for
>formally reviewing and approve/request change etc.)
>
>
>- Changes Requested and awaiting contributors to update - 8 - Link
> 3Apr+is%3Aopen+review%3Achanges-requested>
>- Oldest PR - Jan 19, 2018 - PR
>
>
> One important issue observed is, "*Inability to filter the PR based on
> state and component*". For this, one suggested solution is to "*label the
> PRs*" like we label the issues. This will allow community members to filter
> by area of interest and add reviews, while committers can filter by state and take
> necessary action.
>
> In this direction, I have created following 4 new labels.
>
> Please let us know your suggestions, and this is open for feedback and
> changes.
>
>
> -
> pr-awaiting-review
> 
> PR is waiting for code review
> - pr-awaiting-response
> 
> PR is reviewed and waiting for contributor to respond
> - pr-awaiting-testing (then merge)
>  awaiting-testing%20%28then%20merge%29>
> PR is reviewed and waiting CI build and test
> - pr-ready-to-merge
> 
> Review and CI is complete. Ready to Merge
>
>
> On Wed, Aug 8, 2018 at 2:35 PM sandeep krishnamurthy <
> sandeep.krishn...@gmail.com> wrote:
>
> > Hello Community,
> >
> > Recently, we are observing a growing number of open PRs {pending for
> > review, pending for updates, ready to merge but waiting and more}.
> >
> > To shed some light on numbers:
> >
> > --
> > Sandeep Krishnamurthy
> >
>
>
> --
> Sandeep Krishnamurthy
>


Re: [DISCUSS] improve MXNet Scala release process

2018-07-27 Thread Sheng Zha
Thanks, Naveen. Once we have clarity on 3), it should be no problem for
scala to reuse the same solution. For 1), if this is indeed an issue, it
seems that we may have rushed a bit on the scala releases. Are there any
user reports?

-sz

On Fri, Jul 27, 2018 at 5:26 PM, Naveen Swamy  wrote:

> I collaborate with Qing as a part of my day time job, to give you a little
> more perspective on the proposed work
>
> For 1)
> What we found is that users often run into conflicts when they use a
> different version of a dependency (CUDA, cuDNN, OpenBLAS, OpenCV, etc.)
> than the one we build the MXNet backend with and use in the MXNet Scala package.
> It is also not very straightforward for users to install these
> dependencies themselves. To lower the entry barrier and to make
> everything work out of the box, we are thinking of building all these
> dependencies with MXNet (as static libraries) and embedding them in the MXNet
> Scala package. This is also inspired by the work you have done for the Apache
> MXNet pip packages; ideally I would like to reuse some of that work.
>
> Maven does not manage the binaries; you still have to build the binaries and
> release them, but what dependencies the binaries are built with is what
> causes the confusion and problems.
> In the past there were 20+ packages of MXNet and I reduced it to 3 (OSX,
> Linux-CPU, Linux-GPU -- please see the discussion thread below), with the
> latest version of the dependencies and we'll build/manage additional
> packages based on the user demand.
>
> Please see the previous discussion about this topic.
>
> 1) Scala Maven Package discussion:
> https://lists.apache.org/thread.html/c3846515fc5560d826e7b6f47e9b8b
> 6728a925e6f8decb9676456144@%3Cdev.mxnet.apache.org%3E
> 2) Config for Scala Package:
> https://lists.apache.org/thread.html/0be6beb89cc2a792e7ba861a251f9a
> 9a0b81fa36a5a0cd59d9f2cb6f@%3Cdev.mxnet.apache.org%3E
> 3) Current Scala Package Release process:
> https://cwiki.apache.org/confluence/display/MXNET/
> MXNet-Scala+Release+Process
>
>
> Hope that answers
>
>
> On Fri, Jul 27, 2018 at 4:59 PM, Sheng Zha  wrote:
>
> > Qing,
> >
> > For 1, why would it be a blocker, given that there were previous
> releases?
> > Has there been compatibility issues for scala packages? If so, why did we
> > release?
> > There are many maven packages that include binary already, so if we can
> > find the binary for all dependency it's probably best to link to them,
> and
> > let maven manage these dependencies.
> >
> > For 2, since the current CI is based on Jenkins, I imagine it would be
> some
> > sort of Jenkins pipeline? Other people who're better versed in Jenkins
> can
> > chime in.
> >
> > Personally, I'm interested in 3 as well. I have a pipeline for building
> pip
> > packages that's currently not utilizing the CI, and the item 3 is the
> > blocker too. Once you finish, it would be great to refer to the same
> > solution, so that I can move it into the same CI.
> >
> > -sz
> >
> > On Fri, Jul 27, 2018 at 4:37 PM, Qing Lan  wrote:
> >
> > > Hi all,
> > >
> > > Recently contributors on Scala Language development worked together and
> > > finally able to publish Scala package on Maven. Now I would like to
> > raise a
> > > discussion to automate Scala release process and also discover a
> standard
> > > way to release different packages for MXNet so we won’t ask any
> > individuals
> > > to spend a long time to publish the package.
> > >
> > > There are three blocks that stop this automated process:
> > >
> > >   1.  How to build general hardware-compatible backend dependencies for
> > > MXNet (Linux CPU/GPU Mac OSX)
> > >   2.  How to automate the frontend release process and CI integration
> > >   3.  How to keep credentials for the release in the pipeline
> > >
> > > Scala Release process created by Naveen: https://cwiki.apache.org/
> > > confluence/display/MXNET/MXNet-Scala+Release+Process
> > >
> > > Thanks,
> > > Qing
> > >
> > >
> >
>


Re: [DISCUSS] improve MXNet Scala release process

2018-07-27 Thread Sheng Zha
Qing,

For 1, why would it be a blocker, given that there were previous releases?
Have there been compatibility issues for Scala packages? If so, why did we
release?
There are many Maven packages that include binaries already, so if we can
find binaries for all dependencies it's probably best to link to them, and
let Maven manage these dependencies.

For 2, since the current CI is based on Jenkins, I imagine it would be some
sort of Jenkins pipeline? Other people who're better versed in Jenkins can
chime in.

Personally, I'm interested in 3 as well. I have a pipeline for building pip
packages that's currently not utilizing the CI, and the item 3 is the
blocker too. Once you finish, it would be great to refer to the same
solution, so that I can move it into the same CI.

-sz

On Fri, Jul 27, 2018 at 4:37 PM, Qing Lan  wrote:

> Hi all,
>
> Recently contributors on Scala Language development worked together and were
> finally able to publish the Scala package on Maven. Now I would like to raise a
> discussion on automating the Scala release process and also discover a standard
> way to release different packages for MXNet so we won’t ask any individual
> to spend a long time publishing the package.
>
> There are three blockers that stop this automated process:
>
>   1.  How to build general hardware-compatible backend dependencies for
> MXNet (Linux CPU/GPU Mac OSX)
>   2.  How to automate the frontend release process and CI integration
>   3.  How to keep credentials for the release in the pipeline
>
> Scala Release process created by Naveen: https://cwiki.apache.org/
> confluence/display/MXNET/MXNet-Scala+Release+Process
>
> Thanks,
> Qing
>
>


Re: Release blocker: non-deterministic forward in gluon

2018-07-27 Thread Sheng Zha
Tong,

That's great news. I'm glad that OpenBLAS people are responding so quickly.
In that case it's probably a better idea to use that version instead. The
latest OpenBLAS version brings many optimizations for all kinds of hardware.

-sz

On Fri, Jul 27, 2018 at 11:10 AM, Tong He  wrote:

> Hi Sheng,
>
> I also opened an issue on OpenBLAS repo:
> https://github.com/xianyi/OpenBLAS/issues/1700 .
>
> As informed that "0.3.2 should be released this weekend", I tested their
> develop branch as well, and it seems the new version has fixed the bug.
>
> Since OpenBLAS 0.3.2 could also bring performance improvements, I
> propose to wait for OpenBLAS 0.3.2 for our pip post release.
>
>
> Best regards,
>
> Tong He
>
> 2018-07-27 10:54 GMT-07:00 Sheng Zha :
>
> > Forgot to mention, the post release version is a pip package version.
> >
> > -sz
> >
> > > On Jul 27, 2018, at 10:42 AM, Sheng Zha  wrote:
> > >
> > > In this case we can regard it as a release problem, which is usually
> > what post release versions are for. It’s still the same release with
> > different dependency, so there is no code change needed.
> > >
> > > -sz
> > >
> > >
> > >> On Jul 27, 2018, at 8:31 AM, Steffen Rochel 
> > wrote:
> > >>
> > >> Hi Tong - thanks for root causing the problem.
> > >> Sheng - what is 1.2.1.post0? Shouldn't a patch with fix be released as
> > >> 1.2.2?
> > >> Steffen
> > >>
> > >>> On Thu, Jul 26, 2018 at 5:33 PM Sheng Zha 
> wrote:
> > >>>
> > >>> Dear users and developers of Apache MXNet (Incubating),
> > >>>
> > >>> Thanks to Tong's dedication, the root cause for this issue was
> > identified
> > >>> to be instability in OpenBLAS's latest stable version 0.3.1. For
> > details,
> > >>> see Tong's comment
> > >>> <
> > >>> https://github.com/apache/incubator-mxnet/issues/11853#
> > issuecomment-408272772
> > >>>>
> > >>> .
> > >>>
> > >>> Since both the nightly build and the 1.2.1 wheels are affected, we
> > >>> recommend that we stay on OpenBLAS last known stable version 0.2.20
> > that
> > >>> we've been using. I will assume lazy consensus and prepare the fix
> > >>> (1.2.1.post0).
> > >>>
> > >>> -sz
> > >>>
> > >>>> On Tue, Jul 24, 2018 at 3:35 PM, Tong He  wrote:
> > >>>>
> > >>>> Recently there's an issue regarding the inconsistent result from
> gluon
> > >>>> forward:
> > >>>>
> > >>>> https://github.com/apache/incubator-mxnet/issues/11853
> > >>>>
> > >>>> Given a constant input image and loaded pretrained parameters, we
> > expect
> > >>> a
> > >>>> deterministic output from arbitrary repeats of forwards. However
> from
> > the
> > >>>> issue I see that the forwarded result is non-deterministic. It is
> > harmful
> > >>> as
> > >>>> it makes the results from experiments/benchmarks/inference
> > meaningless.
> > >>>>
> > >>>> Therefore I propose to block the 1.3 release before it gets
> resolved.
> > >>>>
> > >>>
> >
>


Re: Release blocker: non-deterministic forward in gluon

2018-07-27 Thread Sheng Zha
Forgot to mention, the post release version is a pip package version.

-sz

> On Jul 27, 2018, at 10:42 AM, Sheng Zha  wrote:
> 
> In this case we can regard it as a release problem, which is usually what 
> post release versions are for. It’s still the same release with different 
> dependency, so there is no code change needed.
> 
> -sz
> 
> 
>> On Jul 27, 2018, at 8:31 AM, Steffen Rochel  wrote:
>> 
>> Hi Tong - thanks for root causing the problem.
>> Sheng - what is 1.2.1.post0? Shouldn't a patch with fix be released as
>> 1.2.2?
>> Steffen
>> 
>>> On Thu, Jul 26, 2018 at 5:33 PM Sheng Zha  wrote:
>>> 
>>> Dear users and developers of Apache MXNet (Incubating),
>>> 
>>> Thanks to Tong's dedication, the root cause for this issue was identified
>>> to be instability in OpenBLAS's latest stable version 0.3.1. For details,
>>> see Tong's comment
>>> <https://github.com/apache/incubator-mxnet/issues/11853#issuecomment-408272772>.
>>> 
>>> Since both the nightly build and the 1.2.1 wheels are affected, we
>>> recommend that we stay on OpenBLAS last known stable version 0.2.20 that
>>> we've been using. I will assume lazy consensus and prepare the fix
>>> (1.2.1.post0).
>>> 
>>> -sz
>>> 
>>>> On Tue, Jul 24, 2018 at 3:35 PM, Tong He  wrote:
>>>> 
>>>> Recently there's an issue regarding the inconsistent result from gluon
>>>> forward:
>>>> 
>>>> https://github.com/apache/incubator-mxnet/issues/11853
>>>> 
>>>> Given a constant input image and loaded pretrained parameters, we expect
>>> a
>>>> deterministic output from arbitrary repeats of forwards. However from the
>>>> issue I see that the forwarded result is non-deterministic. It is harmful
>>> as
>>>> it makes the results from experiments/benchmarks/inference meaningless.
>>>> 
>>>> Therefore I propose to block the 1.3 release before it gets resolved.
>>>> 
>>> 


Re: Release blocker: non-deterministic forward in gluon

2018-07-27 Thread Sheng Zha
In this case we can regard it as a release problem, which is usually what post 
release versions are for. It’s still the same release with a different 
dependency, so there is no code change needed.

-sz


> On Jul 27, 2018, at 8:31 AM, Steffen Rochel  wrote:
> 
> Hi Tong - thanks for root causing the problem.
> Sheng - what is 1.2.1.post0? Shouldn't a patch with fix be released as
> 1.2.2?
> Steffen
> 
>> On Thu, Jul 26, 2018 at 5:33 PM Sheng Zha  wrote:
>> 
>> Dear users and developers of Apache MXNet (Incubating),
>> 
>> Thanks to Tong's dedication, the root cause for this issue was identified
>> to be instability in OpenBLAS's latest stable version 0.3.1. For details,
>> see Tong's comment
>> <https://github.com/apache/incubator-mxnet/issues/11853#issuecomment-408272772>.
>> 
>> Since both the nightly build and the 1.2.1 wheels are affected, we
>> recommend that we stay on OpenBLAS last known stable version 0.2.20 that
>> we've been using. I will assume lazy consensus and prepare the fix
>> (1.2.1.post0).
>> 
>> -sz
>> 
>>> On Tue, Jul 24, 2018 at 3:35 PM, Tong He  wrote:
>>> 
>>> Recently there's an issue regarding the inconsistent result from gluon
>>> forward:
>>> 
>>> https://github.com/apache/incubator-mxnet/issues/11853
>>> 
>>> Given a constant input image and loaded pretrained parameters, we expect
>> a
>>> deterministic output from arbitrary repeats of forwards. However from the
>>> issue I see that the forwarded result is non-deterministic. It is harmful
>> as
>>> it makes the results from experiments/benchmarks/inference meaningless.
>>> 
>>> Therefore I propose to block the 1.3 release before it gets resolved.
>>> 
>> 
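
A quick note on the 1.2.1.post0 naming discussed above: a PEP 440 post release
sorts after its base release, so pip picks it up by default, while the
unchanged base version signals that the source code itself is the same and
only the bundled dependency changed. A minimal sketch using the third-party
packaging module (the version strings below are illustrative):

    from packaging.version import Version

    # A post release orders after its base release but before the next
    # patch release, and it keeps the same base version string.
    assert Version("1.2.1.post0") > Version("1.2.1")
    assert Version("1.2.1.post0") < Version("1.2.2")
    assert Version("1.2.1.post0").base_version == "1.2.1"

This is why a dependency-only rebuild can ship as 1.2.1.post0 rather than
bumping the code version to 1.2.2.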


Re: Release blocker: non-deterministic forward in gluon

2018-07-26 Thread Sheng Zha
Dear users and developers of Apache MXNet (Incubating),

Thanks to Tong's dedication, the root cause for this issue was identified
to be instability in OpenBLAS's latest stable version 0.3.1. For details,
see Tong's comment
<https://github.com/apache/incubator-mxnet/issues/11853#issuecomment-408272772>.

Since both the nightly build and the 1.2.1 wheels are affected, we
recommend that we stay on OpenBLAS last known stable version 0.2.20 that
we've been using. I will assume lazy consensus and prepare the fix
(1.2.1.post0).

-sz

On Tue, Jul 24, 2018 at 3:35 PM, Tong He  wrote:

> Recently there's an issue regarding the inconsistent result from gluon
> forward:
>
> https://github.com/apache/incubator-mxnet/issues/11853
>
> Given a constant input image and loaded pretrained parameters, we expect a
> deterministic output from arbitrary repeats of forwards. However from the
> issue I see that the forwarded result is non-deterministic. It is harmful as
> it makes the results from experiments/benchmarks/inference meaningless.
>
> Therefore I propose to block the 1.3 release before it gets resolved.
>
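
The check described in the report above amounts to repeating the same forward
pass on a constant input with fixed pretrained weights and comparing the
outputs. A minimal sketch in Gluon (the model choice, input shape, and
exact-equality check are illustrative and not taken from the original report):

    import mxnet as mx
    from mxnet.gluon.model_zoo import vision

    # Fixed pretrained weights plus a constant input: every forward pass
    # should produce the same output.
    net = vision.resnet18_v1(pretrained=True)  # downloads pretrained weights
    x = mx.nd.ones((1, 3, 224, 224))           # constant "image"

    baseline = net(x)
    for _ in range(10):
        diff = mx.nd.abs(net(x) - baseline).max().asscalar()
        assert diff == 0.0, 'non-deterministic forward, max diff %g' % diff

With the affected OpenBLAS 0.3.1 build this assertion would be expected to fail
intermittently; with the pinned 0.2.20 build it should pass.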


Re: Request for feedback: proposal for MXNet SDK Office hours

2018-07-23 Thread Sheng Zha
Thanks for the clarification, Naveen. I'd recommend against having wiki or
other mutable document for such discussion, because people's response (or
the lack of it in the case of lazy consensus) is only toward the version
they saw, which can be changed. Rather, it would likely be a better idea to
include all the key points in the discussion thread directly (like you just
did), so that everyone at any point in time can see the same thing.

-sz

On Mon, Jul 23, 2018 at 10:12 PM, Naveen Swamy  wrote:

> Sheng,
> It is in the wiki, I also added a TOC to find it easily.
> https://cwiki.apache.org/confluence/display/MXNET/PROPOSAL%3A+Apache+MXNet%28Incubating%29+Office+Hours#PROPOSAL:ApacheMXNet(Incubating)OfficeHours-How
> How?
>
> Developers would have 1 hour every week to dedicate to the office hours
> meeting. The typical flow for the process is like this:
>
> - At least 24 hours before the office hours session, the user signs up for
>   one of 2 slots (each slot is 30 minutes) by filing a JIRA issue. In that
>   issue the user will provide questions/concerns and relevant details
>   pertaining to the subject.
> - Before or on a day *preceding* the office hours session, the developer who
>   leads office hours for that week reviews the existing queue
>   <https://issues.apache.org/jira/issues/?jql=Project%3D%22Apache%20MXNet%22%20and%20issuetype%3D%22Office%20hours%22%20%20and%20component%20in%20(Keras%2C%20Gluon%2C%20%22Scala%20API%22%2C%20%22Java%20API%22%2C%20ModelServer%2C%20ONNX)>
>   of filed issues and investigates 1 or 2 filed for the upcoming session.
>   The goal is to prepare for the session as much as possible in advance.
>   - Every week one of the Apache MXNet community members
>     (committer/developer) could drive this effort in each area in which
>     support is offered.
>   - If necessary they could engage an SME that has a lot of expertise in the
>     area relevant to the question/issue filed.
> - At the scheduled time the developer leading office hours dials into the
>   meeting bridge and verifies that the corresponding user has joined the
>   line.
>   - If by the end of the time slot the issue/question has not been fully
>     addressed, the developer would propose to take further conversation to
>     the public forum (dev@ list or JIRA). This way office hours slots won't
>     spill over and both slots can be accommodated.
> - If any of the questions have not been fully addressed during the session,
>   the developer will follow up and address the outstanding scope of the
>   issue/question. The corresponding JIRA issue filed for the session should
>   be used as the outlet for following up.
>   - One possible follow up could end up being a new feature request or bug
>     fix. If that is the case, developers would convert the corresponding
>     office hours issue into a normal GitHub issue.
>   - We request SMEs to help in following up on the issues.
>   - At the end of the office hours conversation, the developer who helped
>     the user would summarize their interaction on the JIRA issue filed.
>
>
>
> On Mon, Jul 23, 2018 at 10:04 PM, Sheng Zha  wrote:
>
> > Hi Naveen,
> >
> > While your enthusiasm is certainly appreciated, next time, shall we
> include
> > the "new Issue Type" in the discussion first? I found no prior mention on
> > this.
> >
> > Also, a reminder to everyone that next time, let's respect Apache Infra's
> > time by following the instructions to have an Apache mentor to create
> issue
> > after discussion, instead of "just create". Thanks.
> >
> > -sz
> >
> > On Mon, Jul 23, 2018 at 7:36 PM, Naveen Swamy 
> wrote:
> >
> > > Hey All, just created an INFRA ticket (https://issues.apache.org/jira/browse/INFRA-16805)
> > > requesting a new Issue Type "Office Hours" on JIRA to better manage and
> > > support Office Hours requests.
> > >
> > > One feedback I received was that  "Apache" was neither mentioned in the
> > > discussion nor in the PROPOSAL on the wiki. This is a valid feedback
> and
> > I
> > > have fixed the PROPOSAL.
> > > I propose the office hours under discussion should be explicitly called
> > > "Apache MXNet Office hours".
> > >
> > > Also, Apache INFRA asked to create INFRA tickets only through mentors
> > >
> > > Can one of the mentors kindly help take this ticket forward.
> > >
> > > Thanks, Naveen
> > >
> > >
> > >
> > >
> > > On Thu, Jul 19

Re: Request for feedback: proposal for MXNet SDK Office hours

2018-07-23 Thread Sheng Zha
Hi Naveen,

While your enthusiasm is certainly appreciated, next time, shall we include
the "new Issue Type" in the discussion first? I found no prior mention of
this.

Also, a reminder to everyone that next time, let's respect Apache Infra's
time by following the instructions to have an Apache mentor create the issue
after discussion, instead of "just create". Thanks.

-sz

On Mon, Jul 23, 2018 at 7:36 PM, Naveen Swamy  wrote:

> Hey All, just created an INFRA ticket (https://issues.apache.org/jira/browse/INFRA-16805)
> requesting a new Issue Type "Office Hours" on JIRA to better manage and
> support Office Hours requests.
>
> One feedback I received was that  "Apache" was neither mentioned in the
> discussion nor in the PROPOSAL on the wiki. This is a valid feedback and I
> have fixed the PROPOSAL.
> I propose the office hours under discussion should be explicitly called
> "Apache MXNet Office hours".
>
> Also, Apache INFRA asked to create INFRA tickets only through mentors
>
> Can one of the mentors kindly help take this ticket forward.
>
> Thanks, Naveen
>
>
>
>
> On Thu, Jul 19, 2018 at 10:01 AM, Pedro Larroy <
> pedro.larroy.li...@gmail.com
> > wrote:
>
> > Yes Naveen, I think you are saying exactly the same as I hinted. Sheng
> also
> > agreed with this.
> >
> > Pedro.
> >
> > On Thu, Jul 19, 2018 at 6:54 PM Naveen Swamy  wrote:
> >
> > > I do not think there needs to be a distinction made for
> > > support/office-hours by committer or contributors (in this case Amazon
> > > employed contributors) -- correct me if I misunderstood your guess :).
> > > Like I said, I would rather call it MXNet Office hours and categorize
> the
> > > kind of support that is offered, we might be able to find contributors
> > > willing to do this in different parts of the world regardless of their
> > day
> > > job or not.
> > >
> > > On Thu, Jul 19, 2018 at 9:21 AM, Sheng Zha  wrote:
> > >
> > > > I'm guessing Mu's intention is to make it clear that such invitation
> is
> > > > extended by teams in Amazon/AWS instead of by committers, so as to
> > avoid
> > > > the confusion of the naming "MXNet SDK". Suggestions to achieve the
> > same
> > > > goal are welcome.
> > > >
> > > > Best regards,
> > > > -sz
> > > >
> > > > On Thu, Jul 19, 2018 at 9:09 AM, Isabel Drost-Fromm <
> isa...@apache.org
> > >
> > > > wrote:
> > > >
> > > > >
> > > > >
> > > > > On 18/07/18 23:30, Mu Li wrote:
> > > > >
> > > > >> A minor suggestion: rename MXNet SDK to AWS MXNet SDK or Amazon
> > MXNet
> > > > SDK.
> > > > >>
> > > > >
> > > > > What exactly is the Amazon MXNet SDK? What is the AWS MXNet SDK?
> > > > >
> > > > > Your suggestion triggered my question because:
> > > > >
> > > > > https://www.apache.org/foundation/marks/#products
> > > > >
> > > > >
> > > > > Isabel
> > > > >
> > > > >
> > > >
> > >
> >
>


Re: Request for feedback: proposal for MXNet SDK Office hours

2018-07-19 Thread Sheng Zha
Thanks. Yeah, I think naming after the focus area instead of Amazon internal
organization/team names would likely work better. That way, others who'd
like to offer help in that area can just jump in and start helping.

-sz

On Thu, Jul 19, 2018 at 9:51 AM, Pedro Larroy 
wrote:

> There's an MXNet committer in our office hours (Marco de Abreu @marcoabreu),
> others are contributors such as Anton (@lebeg) or others. I think we could
> refocus the conversation to the point that the office hours might have some
> emphasis in a particular area of MXNet.
>
> On Thu, Jul 19, 2018 at 6:21 PM Sheng Zha  wrote:
>
> > I'm guessing Mu's intention is to make it clear that such invitation is
> > extended by teams in Amazon/AWS instead of by committers, so as to avoid
> > the confusion of the naming "MXNet SDK". Suggestions to achieve the same
> > goal are welcome.
> >
> > Best regards,
> > -sz
> >
> > On Thu, Jul 19, 2018 at 9:09 AM, Isabel Drost-Fromm 
> > wrote:
> >
> > >
> > >
> > > On 18/07/18 23:30, Mu Li wrote:
> > >
> > >> A minor suggestion: rename MXNet SDK to AWS MXNet SDK or Amazon MXNet
> > SDK.
> > >>
> > >
> > > What exactly is the Amazon MXNet SDK? What is the AWS MXNet SDK?
> > >
> > > Your suggestion triggered my question because:
> > >
> > > https://www.apache.org/foundation/marks/#products
> > >
> > >
> > > Isabel
> > >
> > >
> >
>


Re: Request for feedback: proposal for MXNet SDK Office hours

2018-07-19 Thread Sheng Zha
I'm guessing Mu's intention is to make it clear that such invitation is
extended by teams in Amazon/AWS instead of by committers, so as to avoid
the confusion of the naming "MXNet SDK". Suggestions to achieve the same
goal are welcome.

Best regards,
-sz

On Thu, Jul 19, 2018 at 9:09 AM, Isabel Drost-Fromm 
wrote:

>
>
> On 18/07/18 23:30, Mu Li wrote:
>
>> A minor suggestion: rename MXNet SDK to AWS MXNet SDK or Amazon MXNet SDK.
>>
>
> What exactly is the Amazon MXNet SDK? What is the AWS MXNet SDK?
>
> Your suggestion triggered my question because:
>
> https://www.apache.org/foundation/marks/#products
>
>
> Isabel
>
>


[RESULT][VOTE] Subscribe dev@ to Github Activities

2018-07-19 Thread Sheng Zha
Hi,

The vote concluded at 9PM today (2018/07/18), and here are the results:
+1
Timur Shenkao
Aaron Markham
Lin Yuan
Anirudh Acharya
Junru Shao
Yizhi Liu (committer)
Zhi Zhang (committer)
Tianqi Chen (committer)
Sheng Zha (committer)

-1
Qing Lan
Rahul Huilgol
K, S
Anirudh (committer)
Chris Olivier (committer)

The vote thread can be found here
<https://lists.apache.org/thread.html/31bf3463fe4c0565f8c0dd18bfd1b2261b0cb03edd19b81b92d1255f@%3Cdev.mxnet.apache.org%3E>
(and
here
<https://lists.apache.org/thread.html/38840446a0c07d19ded23fcce2d90ff31ebcf07034fec5b418553ab2@%3Cdev.mxnet.apache.org%3E>).
The original discussion thread can be found here
<https://lists.apache.org/thread.html/3d883f6a3cbc8e81e810962e0c0fe7bfd01f0b78d3cb44034f566442@%3Cdev.mxnet.apache.org%3E>
.

According to Apache Voting Process
<https://www.apache.org/foundation/voting.html>, this procedural vote has
now passed.

However, some concerns have come up as part of the discussion on the voting
thread. Specifically, they are about increase in the amount of traffic (and
potentially noise) on dev list, and guidelines on how to manage them. In
light of that, I'd like to request that mentors help us connect with Apache
Infra team and help us explore our options on Github-Dev integration, and
hopefully we could find a solution that achieves the best balance.

For those who are interested, let's follow the best practice and continue
the discussion on managing the integration of Github with dev list in
parallel in the discussion thread
<https://lists.apache.org/thread.html/3d883f6a3cbc8e81e810962e0c0fe7bfd01f0b78d3cb44034f566442@%3Cdev.mxnet.apache.org%3E>.
Thank you.

-sz


Re: [VOTE] Subscribe dev@ to Github Activities

2018-07-18 Thread Sheng Zha
Thanks, I hear the concerns and it's not my intention to push people off
the list. On the other hand, I think github discussions are no more
"artificial" than discussions on dev list, and the good and important
discussions warrant the same amount of attention. With this vote, I intend
to make contributors' lives easier by decoupling the recognized forum from
the technology they use, so that github contributors can easily
communicate with the community on the list.

-sz

On Wed, Jul 18, 2018 at 9:05 AM, Barber, Christopher <
christopher.bar...@analog.com> wrote:

> Can't you simply tell contributors to discuss changes on dev before
> submitting a PR? Since the contribution guidelines don't tell developers to
> post to dev, why would you expect them to do that?
>
> Is there an easy way to just subscribe to PR notifications or will someone
> have to write a filter to avoid spamming dev with all GitHub notifications?
> I think that if dev gets too much traffic, then people with casual interest
> may find it easier to unsubscribe than to set up filters. Once someone
> unsubscribes, they probably won't be coming back soon, so you should be
> very careful with this.
>
> I don't see why artificially increasing the traffic on dev will do
> anything to grow the community in any case.
>
> - C
>
> On 7/18/18, 11:17 AM, "Indhu"  wrote:
>
> Some mentors/contributors/committers feel that the amount of
> discussions in
> dev list is too low given the amount of commits that happen and more
> discussions need to happen in the dev list to grow the community.
>
> In response some committers feel discussions actually happen in GitHub
> PRs.
> If the policy says "if it didn't happen in dev, it didn't happen",
> let's
> forward all GitHub discussions to dev so those discussions would count.
> That's the motivation for this vote.
>
> I think when people say there needs to be more discussions in the dev
> list,
> I assume they mean the kind of discussions that happen *before* a PR is
> created or even before someone starts working on anything. I don't
> think
> people are asking an email for every activity on GitHub. The correct
> way to
> address the problem would be for committers/contributors to stop
> communicating in private channels (like Amazon or DMLC communication
> channels) and do those discussions in the dev list instead.
>
> Indu
>
>
> On Wed, Jul 18, 2018, 5:51 AM Barber, Christopher <
> christopher.bar...@analog.com> wrote:
>
> > Can't people already subscribe to github notifications? I think it
> is safe
> > to assume that developers are already smart enough to figure out how
> to do
> > that if they want. What problem are you really trying to solve here?
> >
> > On 7/18/18, 4:49 AM, "Chris Olivier"  wrote:
> >
> > -1.  (changed from -0.9)
> >
> > seems more like a strategy (whether intentional or on accident)
> to
> > *not*
> > have design discussions on dev by flooding it with noise and
> then later
> > claim it was discussed, even though you would have to sift
> through
> > thousands of emails to find it.
> >
> >
> >
> > On Wed, Jul 18, 2018 at 12:42 AM Rahul Huilgol <
> rahulhuil...@gmail.com
> > >
> > wrote:
> >
> > > I pulled up some more stats so we can make an informed
> decision.
> > >
> > > Here are some popular Apache projects and the number of emails
> to
> > their
> > > dev@
> > > list in the last 30 days
> > > Apache Flink: 540 mails
> > > ​Apache Spark: 249 mails
> > > Apache Hive: 481 mails
> > > Apache HBase: 300 mails
> > >
> > > Current dev list for MXNet: 348 mails
> > > Current commits list for MXNet: 5329 mails
> > > Making the proposed dev list for MXNet to be ~5677 mails.
> > >
> > > Sheng, even going by your comments that 1 of those 4 mails
> are
> > relevant
> > > for dev@, that's still a really high number of emails. (130
> email
> > lists
> > > doesn't say anything if we ignore the actual number of emails
> in
> > those
> > > lists, especially when the 131st sends these many mails :) ).
> People
> > are
> > > already talking about setting up filters here. Doesn't that
> defeat
> > the
> > > purpose by making people filter out the discussion on Github?
> People
> > can
> > > subscribe to commits@ if they find it more convenient to
> follow
> > Github
> > > activity over email rather than Github.com.
> > >
> > > We should strive to maintain dev@ as a place for high quality
> > discussion.
> > > It's upto the contributors to bring up something to dev@ if
> they
> > believe
> > > it
> > > deserves a focused discussion in the community. That
> discussion may
>

Re: [VOTE] Subscribe dev@ to Github Activities

2018-07-18 Thread Sheng Zha
A discussion is a discussion, and in the case of MXNet I’d say a lot more high 
quality discussion has happened on GitHub than on dev@. Github issues have 
plenty of discussions before code changes. The reason is simply that MXNet 
has a longer history on GitHub than the dev list, and long-term 
contributors tend to bring high quality discussion to dev@.

I don’t intend to flood the dev list with this vote. There are high quality 
discussions on GitHub that people on the dev list can benefit from, and that’s 
the only intention for such a change. The community feedback will help decide how 
to best integrate these two communication tools. However, some good solutions 
such as the “opt-in w/ github mention” that Qing has brought up will require 
exploration with apache infra team, which requires a vote to show the will of 
the community first.

-sz


> On Jul 18, 2018, at 8:16 AM, Indu  wrote:
> 
> Some mentors/contributors/committers feel that the amount of discussions in
> dev list is too low given the amount of commits that happen and more
> discussions need to happen in the dev list to grow the community.
> 
> In response some committers feel discussions actually happen in GitHub PRs.
> If the policy says "if it didn't happen in dev, it didn't happen", let's
> forward all GitHub discussions to dev so those discussions would count.
> That's the motivation for this vote.
> 
> I think when people say there needs to be more discussions in the dev list,
> I assume they mean the kind of discussions that happen *before* a PR is
> created or even before someone starts working on anything. I don't think
> people are asking an email for every activity on GitHub. The correct way to
> address the problem would be for committers/contributors to stop
> communicating in private channels (like Amazon or DMLC communication
> channels) and do those discussions in the dev list instead.
> 
> Indu
> 
> 
> On Wed, Jul 18, 2018, 5:51 AM Barber, Christopher <
> christopher.bar...@analog.com> wrote:
> 
>> Can't people already subscribe to github notifications? I think it is safe
>> to assume that developers are already smart enough to figure out how to do
>> that if they want. What problem are you really trying to solve here?
>> 
>> On 7/18/18, 4:49 AM, "Chris Olivier"  wrote:
>> 
>>-1.  (changed from -0.9)
>> 
>>seems more like a strategy (whether intentional or on accident) to
>> *not*
>>have design discussions on dev by flooding it with noise and then later
>>claim it was discussed, even though you would have to sift through
>>thousands of emails to find it.
>> 
>> 
>> 
>>On Wed, Jul 18, 2018 at 12:42 AM Rahul Huilgol >> 
>>wrote:
>> 
>>> I pulled up some more stats so we can make an informed decision.
>>> 
>>> Here are some popular Apache projects and the number of emails to
>> their
>>> dev@
>>> list in the last 30 days
>>> Apache Flink: 540 mails
>>> ​Apache Spark: 249 mails
>>> Apache Hive: 481 mails
>>> Apache HBase: 300 mails
>>> 
>>> Current dev list for MXNet: 348 mails
>>> Current commits list for MXNet: 5329 mails
>>> Making the proposed dev list for MXNet to be ~5677 mails.
>>> 
>>> Sheng, even going by your comments that 1 of those 4 mails are
>> relevant
>>> for dev@, that's still a really high number of emails. (130 email
>> lists
>>> doesn't say anything if we ignore the actual number of emails in
>> those
>>> lists, especially when the 131st sends these many mails :) ). People
>> are
>>> already talking about setting up filters here. Doesn't that defeat
>> the
>>> purpose by making people filter out the discussion on Github? People
>> can
>>> subscribe to commits@ if they find it more convenient to follow
>> Github
>>> activity over email rather than Github.com.
>>> 
>>> We should strive to maintain dev@ as a place for high quality
>> discussion.
>>> It's upto the contributors to bring up something to dev@ if they
>> believe
>>> it
>>> deserves a focused discussion in the community. That discussion may
>> be
>>> started by the person who proposes code changes, or a reviewer who
>> believes
>>> that a particular code change warrants further discussion.
>>> 
>>> Regards,
>>> Rahul
>>> 
>> 
>> 
>> 


Re: [VOTE] Subscribe dev@ to Github Activities

2018-07-18 Thread Sheng Zha
In the linked discussion thread you can find comments that Flink does this, and 
Spark used to but no longer does.

I don’t intend to claim anything on this vote thread, though one thing is 
clear: without this change, github activity doesn’t count as happening per 
apache standard, because it didn’t happen on dev@. I do personally like 
Tianqi’s take that this will help us discover contributors who are prolific in 
development, not just those who are prolific in writing emails.

-sz

> On Jul 17, 2018, at 11:21 PM, Chris Olivier  wrote:
> 
> -0.9
> 
> Do any other Apache projects do this? Seems really odd. Jira was posting to
> dev for maybe 3 days and people were complaining like crazy about the
> noise, and that was just a few tickets. Now we’re talking about possibly
> hundreds of emails per day. ALL PR comments, commit notifications, issue
> movement, tagging, etc.
> 
> It’s hard to imagine how this would be useful.
> 
> Also, does this also mean that claiming that anything said or done in
> github “was discussed on dev”?
> 
> -C
> 
>> On Tue, Jul 17, 2018 at 2:24 PM Sheng Zha  wrote:
>> 
>> Thanks, Rahul. Out of the 4 conversations you listed that you think are not
>> necessary, I actually think the PR on coreml tool may be worth discussing.
>> For example, should it (and other tools) have a separate repo, and should
>> its version management be tied to mxnet.
>> 
>> And on:
>> 
>>> If people are forced to setup filters to parse these mails, then we are
>> *ensuring*
>> people don't get their eyes on valuable discussions on dev@.
>> 
>> I think this argument is based more on emotion than on reason. I subscribe
>> to over 130 email lists for work, lots of which have PR/commit updates that
>> are not my immediate concern, and it hasn't prevented me from reading
>> valuable discussions.
>> 
>> -sz
>> 
>> On Tue, Jul 17, 2018 at 1:05 PM, Rahul Huilgol 
>> wrote:
>> 
>>> -1
>>> 
>>> We had such a thing before and people asked for the mails to be
>> redirected
>>> to a different list commits@ because of the flood of mails.
>>> 
>>> https://lists.apache.org/thread.html/8b834e39110381fadb8a0ab59185a8
>>> f52b8406247a1f281f7d691392@%3Cdev.mxnet.apache.org%3E
>>> 
>>> I don't know if people have a sense of the volume of mails this can add
>>> here. Here's the stats from the commits@ email list we have. I'd be
>>> curious
>>> to see how many subscribers we have to that. Hopefully the people voting
>> +1
>>> here subscribed to that :)
>>> 
>>> 2018 June: 4617
>>> 2018 July: (half a month) 3106
>>> (Source of the numbers are here
>>> https://lists.apache.org/list.html?comm...@mxnet.apache.org:2018-7)
>>> 
>>> @Joshua: yes we need to bring 'valuable' (emphasis mine) discussion to a
>>> centralized place @dev. Does everything needs to be sent to dev@. For
>>> example, consider these recent PRs, why is it necessary for them to be
>>> forwarded to dev@?
>>> 
>>> fix flaky test test_operator_gpu.test_countsketch:
>>> https://github.com/apache/incubator-mxnet/pull/11780
>>> Update PyPI version number:
>>> https://github.com/apache/incubator-mxnet/pull/11773
>>> Fix file name creation for Windows:
>>> https://github.com/apache/incubator-mxnet/pull/11765
>>> [MXNET-8230] test_operator_gpu.test_rms fails:
>>> https://github.com/apache/incubator-mxnet/pull/11749
>>> 
>>> If people are forced to setup filters to parse these mails, then we are
>>> *ensuring* people don't get their eyes on valuable discussions on dev@.
>>> 
>>> Regards,
>>> Rahul
>>> 
>>>> On Tue, Jul 17, 2018 at 12:49 PM, Sheng Zha  wrote:
>>>> 
>>>> FWIW: "from:notificati...@github.com AND
>> to:dev@mxnet.incubator.apache.
>>> org
>>>> AND NOT to:me" but I'm sure you get the gist :)
>>>> 
>>>> 
>>>> Opt-in model applies to individuals rather than the dev list, because
>> the
>>>> dev list is intended as an asynchronous way for new comers to easily
>>> follow
>>>> past technical discussions, and is the only place recognized by apache
>>> for
>>>> these discussions. Currently, lots of high quality technical
>> discussions
>>>> that are happening on github are lost and not archived here. The
>>> procedural
>>>> change in this vote is intended for bridging such gap. Besides, it's
>> more
>>>> likely for new contribu

Re: [VOTE] Subscribe dev@ to Github Activities

2018-07-17 Thread Sheng Zha
Thanks, Rahul. Out of the 4 conversations you listed that you think are not
necessary, I actually think the PR on the coreml tool may be worth discussing.
For example, should it (and other tools) have a separate repo, and should
its version management be tied to mxnet's?

And on:

> If people are forced to setup filters to parse these mails, then we are 
> *ensuring*
people don't get their eyes on valuable discussions on dev@.

I think this argument is based more on emotion than on reason. I subscribe
to over 130 email lists for work, lots of which have PR/commit updates that
are not my immediate concern, and it hasn't prevented me from reading
valuable discussions.

-sz

On Tue, Jul 17, 2018 at 1:05 PM, Rahul Huilgol 
wrote:

> -1
>
> We had such a thing before and people asked for the mails to be redirected
> to a different list commits@ because of the flood of mails.
>
> https://lists.apache.org/thread.html/8b834e39110381fadb8a0ab59185a8
> f52b8406247a1f281f7d691392@%3Cdev.mxnet.apache.org%3E
>
> I don't know if people have a sense of the volume of mails this can add
> here. Here's the stats from the commits@ email list we have. I'd be
> curious
> to see how many subscribers we have to that. Hopefully the people voting +1
> here subscribed to that :)
>
> 2018 June: 4617
> 2018 July: (half a month) 3106
> (Source of the numbers are here
> https://lists.apache.org/list.html?comm...@mxnet.apache.org:2018-7)
>
> @Joshua: yes we need to bring 'valuable' (emphasis mine) discussion to a
> centralized place @dev. Does everything needs to be sent to dev@. For
> example, consider these recent PRs, why is it necessary for them to be
> forwarded to dev@?
>
> fix flaky test test_operator_gpu.test_countsketch:
> https://github.com/apache/incubator-mxnet/pull/11780
> Update PyPI version number:
> https://github.com/apache/incubator-mxnet/pull/11773
> Fix file name creation for Windows:
> https://github.com/apache/incubator-mxnet/pull/11765
> [MXNET-8230] test_operator_gpu.test_rms fails:
> https://github.com/apache/incubator-mxnet/pull/11749
>
> If people are forced to setup filters to parse these mails, then we are
> *ensuring* people don't get their eyes on valuable discussions on dev@.
>
> Regards,
> Rahul
>
> On Tue, Jul 17, 2018 at 12:49 PM, Sheng Zha  wrote:
>
> > FWIW: "from:notificati...@github.com AND to:dev@mxnet.incubator.apache.
> org
> > AND NOT to:me" but I'm sure you get the gist :)
> >
> >
> > Opt-in model applies to individuals rather than the dev list, because the
> > dev list is intended as an asynchronous way for new comers to easily
> follow
> > past technical discussions, and is the only place recognized by apache
> for
> > these discussions. Currently, lots of high quality technical discussions
> > that are happening on github are lost and not archived here. The
> procedural
> > change in this vote is intended for bridging such gap. Besides, it's more
> > likely for new contributors to know how to filter emails than to know how
> > to "opt-in".
> >
> >
> > More discussion is welcome in the linked discussion thread.
> >
> >
> > -sz
> >
> > On Tue, Jul 17, 2018 at 12:37 PM, pracheer gupta <
> > pracheer_gu...@hotmail.com
> > > wrote:
> >
> > > FWIW: The filter needs to be more complicated than just "
> > > from:notificati...@github.com". After all, if someone mentions me
> > > directly in PR thread and/or I subscribe to only a particular PR, those
> > > emails will also come from "notificati...@github.com". There are ways
> > > around that though.
> > >
> > >
> > > It might be good to mention this filter in some wiki/webpage somewhere;
> > > may save some effort for people trying to find the right set of
> filters.
> > It
> > > could even be in the welcome email when one subscribes to this
> > email-list.
> > >
> > >
> > > Another alternate option: How about choosing an opt-in model rather
> than
> > > an opt-out model? Having another email list and anyone can subscribe to
> > it
> > > if they wish.
> > >
> > >
> > > Not sure if there is a perfect answer out there for this but in
> principle
> > > I agree that it will be good to have "push notifications" for all
> > PRs/issue.
> > >
> > >
> > > -Pracheer
> > >
> > > 
> > > From: Junru Shao 
> > > Sent: Tuesday, July 17, 2018 10:58:33 AM
> > > To: d...@mxnet.apache.org
> > > Subject: Re: [VOTE] Subscribe dev@ t

Re: [VOTE] Subscribe dev@ to Github Activities

2018-07-17 Thread Sheng Zha
Hi S,

Keeping a separate list defeats the purpose, because then such conversation
is again not happening on dev, which is deemed to be in the "did not
happen" category. Also, conversations that are not relevant to you are
already happening on the list, and you're under no obligation to read them
all.

-sz

On Tue, Jul 17, 2018 at 1:20 PM, K, S  wrote:

> -1
>
> Keeping a separate email list for subscribing to github activities seems
> like a better idea. One can always reference the issue/discussion/PR in the
> dev list to initiate conversation. Biggest concern is that important
> discussion can get buried in a flood of emails that are not completely
> relevant to me.
>
> SK
>
> On 7/17/18, 1:07 PM, "Rahul Huilgol"  wrote:
>
> -1
>
> We had such a thing before and people asked for the mails to be
> redirected
> to a different list commits@ because of the flood of mails.
>
> https://lists.apache.org/thread.html/8b834e39110381fadb8a0ab59185a8
> f52b8406247a1f281f7d691392@%3Cdev.mxnet.apache.org%3E
>
> I don't know if people have a sense of the volume of mails this can add
> here. Here's the stats from the commits@ email list we have. I'd be
> curious
> to see how many subscribers we have to that. Hopefully the people
> voting +1
> here subscribed to that :)
>
> 2018 June: 4617
> 2018 July: (half a month) 3106
> (Source of the numbers are here
> https://lists.apache.org/list.html?comm...@mxnet.apache.org:2018-7)
>
> @Joshua: yes we need to bring 'valuable' (emphasis mine) discussion to
> a
> centralized place @dev. Does everything needs to be sent to dev@. For
> example, consider these recent PRs, why is it necessary for them to be
> forwarded to dev@?
>
> fix flaky test test_operator_gpu.test_countsketch:
> https://github.com/apache/incubator-mxnet/pull/11780
> Update PyPI version number:
> https://github.com/apache/incubator-mxnet/pull/11773
> Fix file name creation for Windows:
> https://github.com/apache/incubator-mxnet/pull/11765
> [MXNET-8230] test_operator_gpu.test_rms fails:
> https://github.com/apache/incubator-mxnet/pull/11749
>
> If people are forced to setup filters to parse these mails, then we are
> *ensuring* people don't get their eyes on valuable discussions on dev@
> .
>
> Regards,
> Rahul
>
> On Tue, Jul 17, 2018 at 12:49 PM, Sheng Zha 
> wrote:
>
> > FWIW: "from:notificati...@github.com AND
> to:dev@mxnet.incubator.apache.org
> > AND NOT to:me" but I'm sure you get the gist :)
> >
> >
> > Opt-in model applies to individuals rather than the dev list,
> because the
> > dev list is intended as an asynchronous way for new comers to easily
> follow
> > past technical discussions, and is the only place recognized by
> apache for
> > these discussions. Currently, lots of high quality technical
> discussions
> > that are happening on github are lost and not archived here. The
> procedural
> > change in this vote is intended for bridging such gap. Besides, it's
> more
> > likely for new contributors to know how to filter emails than to
> know how
> > to "opt-in".
> >
> >
> > More discussion is welcome in the linked discussion thread.
> >
> >
> > -sz
> >
> > On Tue, Jul 17, 2018 at 12:37 PM, pracheer gupta <
> > pracheer_gu...@hotmail.com
> > > wrote:
> >
> > > FWIW: The filter needs to be more complicated than just "
> > > from:notificati...@github.com". After all, if someone mentions me
> > > directly in PR thread and/or I subscribe to only a particular PR,
> those
> > > emails will also come from "notificati...@github.com". There are
> ways
> > > around that though.
> > >
> > >
> > > It might be good to mention this filter in some wiki/webpage
> somewhere;
> > > may save some effort for people trying to find the right set of
> filters.
> > It
> > > could even be in the welcome email when one subscribes to this
> > email-list.
> > >
> > >
> > > Another alternate option: How about choosing an opt-in model
> rather than
> > > an opt-out model? Having another email list and anyone can
> subscribe to
> > it
> > > if they wish.
> > >
> > >
> > > Not sure if there is a perfect answer out there for this but in
> princip

Re: [VOTE] Subscribe dev@ to Github Activities

2018-07-17 Thread Sheng Zha
FWIW: "from:notificati...@github.com AND to:dev@mxnet.incubator.apache.org
AND NOT to:me" but I'm sure you get the gist :)


Opt-in model applies to individuals rather than the dev list, because the
dev list is intended as an asynchronous way for newcomers to easily follow
past technical discussions, and is the only place recognized by apache for
these discussions. Currently, lots of high quality technical discussions
that are happening on github are lost and not archived here. The procedural
change in this vote is intended for bridging such gap. Besides, it's more
likely for new contributors to know how to filter emails than to know how
to "opt-in".


More discussion is welcome in the linked discussion thread.


-sz

On Tue, Jul 17, 2018 at 12:37 PM, pracheer gupta  wrote:

> FWIW: The filter needs to be more complicated than just "
> from:notificati...@github.com". After all, if someone mentions me
> directly in PR thread and/or I subscribe to only a particular PR, those
> emails will also come from "notificati...@github.com". There are ways
> around that though.
>
>
> It might be good to mention this filter in some wiki/webpage somewhere;
> may save some effort for people trying to find the right set of filters. It
> could even be in the welcome email when one subscribes to this email-list.
>
>
> Another alternate option: How about choosing an opt-in model rather than
> an opt-out model? Having another email list and anyone can subscribe to it
> if they wish.
>
>
> Not sure if there is a perfect answer out there for this but in principle
> I agree that it will be good to have "push notifications" for all PRs/issue.
>
>
> -Pracheer
>
> 
> From: Junru Shao 
> Sent: Tuesday, July 17, 2018 10:58:33 AM
> To: d...@mxnet.apache.org
> Subject: Re: [VOTE] Subscribe dev@ to Github Activities
>
> +1
>
> Both GitHub activities and dev list are places for development. It will be
> great if we could have an all-in-one place for such discussions. I believe
> Sheng's proposal is a perfect solution.
>
> On 2018/07/16 03:32:06, Sheng Zha  wrote:
> > Hi,
> >
> > I'm starting a vote on subscribing dev@ to Github activities. See
> previous
> > discussion thread here
> > <https://lists.apache.org/thread.html/3d883f6a3cbc8e81e810962e0c0fe7
> bfd01f0b78d3cb44034f566442@%3Cdev.mxnet.apache.org%3E>
> > .
> >
> > The vote lasts for three days and ends on 7/18/2018 at 9pm pst.
> >
> > -sz
> >
>


Re: [VOTE] Subscribe dev@ to Github Activities

2018-07-17 Thread Sheng Zha
Hi Anirudh,

1. You need exactly one filter to filter out all the github notifications
on PRs and issues: "from:notificati...@github.com", and you'd get your S/N
ratio back.
2. Having the option to do design discussion on an issue or PR is actually
a good thing as many discussions are quite small and better accompanied by
code. If for some reason a merged design needs revisiting, there's still
the option of sending an email to dev@ and discuss about it.
3. About votes, commit vote (and veto) can already happen on PR per past
agreement. The discussion for procedural vote IMO should be allowed to
happen on Github if it's development related. Procedural votes themselves
should and can still happen on dev@.

About "you don't really have to do anything explicitly on the dev@ list",
besides the above arguments, we don't send emails to dev@ just for the
purpose of sending it. On the other hand, since "whatever didn't happen on
dev list didn't happen", we'd need better arguments on why we'd choose to
forego the transparency.

-sz

On Tue, Jul 17, 2018 at 8:47 AM, Anirudh  wrote:

> -1
>
> The low signal to noise ratio would mean that we may miss important emails.
> Even with the different filters that we may setup for dev@, the emails
> would be too many to not miss the important ones. We would see more and
> more people starting a design discussion on an issue or PR. Because of the
> low signal to noise ratio on the dev@ list, many may miss these
> discussions.
>
> Slowly, this would erode the purpose of the dev@ list as this means that
> you don't really have to do anything explicitly on the dev@ list.
> You can start a design discussion on a github issue. You can start a
> vote/discussion on a github issue.
>
> Anirudh
>
> On Mon, Jul 16, 2018 at 4:35 AM, Timur Shenkao  wrote:
>
> > +1 if my vote can be taken into account
> >
> > On Mon, Jul 16, 2018 at 4:32 AM, Sheng Zha  wrote:
> >
> > > Hi,
> > >
> > > I'm starting a vote on subscribing dev@ to Github activities. See
> > previous
> > > discussion thread here
> > > <https://lists.apache.org/thread.html/3d883f6a3cbc8e81e810962e0c0fe7
> > > bfd01f0b78d3cb44034f566442@%3Cdev.mxnet.apache.org%3E>
> > > .
> > >
> > > The vote lasts for three days and ends on 7/18/2018 at 9pm pst.
> > >
> > > -sz
> > >
> >
>


Re: [VOTE] Release MXNet version 1.2.1.RC1

2018-07-16 Thread Sheng Zha
I can confirm that #10989’s corresponding commit 022f238 is not on the release 
branch. Fix of #11772 on master is in #11776.

-sz

> On Jul 16, 2018, at 8:23 PM, Afrooze, Sina  
> wrote:
> 
> Thanks Anirudh for checking. Looks like the PR for adding model summary 
> feature (https://github.com/apache/incubator-mxnet/pull/10989) introduced 
> this regression. This change isn't included in 1.2.0 or 1.2.1, so 1.2.1 
> should be good.
> 
> @szha: Can you please double check and confirm that 1.2.1 is indeed not 
> impacted?
> 
> - Sina
> 
> On 7/16/18, 5:46 PM, "Anirudh"  wrote:
> 
>Hi Sina,
> 
>I am unable to reproduce this issue on 1.2.1.
> 
>Anirudh
> 
>>On Mon, Jul 16, 2018 at 5:26 PM, Afrooze, Sina  wrote:
>> 
>> I know voting is over for this release, but I think this issue may warrant
>> delaying: https://github.com/apache/incubator-mxnet/issues/11772. Looks
>> like save_parameters doesn't fix the issue it is designed to fix (i.e. two
>> instances of the same network in the same session) if LSTMs are used.
>> 
>> - Sina
>> 
>> On 7/12/18, 3:27 PM, "Hao Jin"  wrote:
>> 
>>+1 Built on Ubuntu with CUDA 9.0 and CuDNN 7 and verified that sparse
>> tests
>>are passing.
>>Hao
>> 
>>On Thu, Jul 12, 2018 at 3:01 PM, Sergio Fernández 
>> wrote:
>> 
>>> +1 (binding)
>>> 
>>> On Mon, Jul 9, 2018, 16:53 Roshani Nagmote <
>> roshaninagmo...@gmail.com>
>>> wrote:
>>> 
 Hi all,
 
 I would like to propose a vote to release Apache MXNet (incubating)
>>> version
 1.2.1.RC1. Voting will start now (Monday, Jul 9th) and end at 5:50
>> PM
 PDT, Thursday, July 12th.
 
 Link to release candidate 1.2.1.rc1:
 https://github.com/apache/incubator-mxnet/releases/tag/1.2.1.rc1
 
 View this page, click on "Build from Source", and use the source
>> code
 obtained from 1.2.1.rc1 tag:
 https://mxnet.incubator.apache.org/install/index.html
 
 (Note: The README.md points to the 1.2.1 tag and does not work at
>> the
 moment.)
 
 Please remember to test first before voting accordingly:
 
 +1 = approve
 +0 = no opinion
 -1 = disapprove (provide reason)
 
 Thanks,
 Roshani
 
>>> 
>> 
>> 
>> 
>> 
> 
> 
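
The scenario in the report above, two instances of the same network in one
session exchanging weights through save_parameters/load_parameters with an
LSTM involved, follows the usage pattern sketched below (layer sizes, shapes,
and the file name are illustrative; this only shows the pattern under
discussion, not a reproduction of issue #11772):

    import mxnet as mx
    from mxnet import gluon

    def make_net():
        net = gluon.nn.Sequential()
        with net.name_scope():
            net.add(gluon.rnn.LSTM(hidden_size=8))  # recurrent layer, TNC layout
            net.add(gluon.nn.Dense(4))
        return net

    x = mx.nd.random.uniform(shape=(5, 2, 3))  # (seq_len, batch, features)

    first = make_net()
    first.initialize()
    ref = first(x)
    first.save_parameters('net.params')  # keys weights by block-local names

    second = make_net()  # second instance of the same architecture, same session
    second.load_parameters('net.params')
    out = second(x)

    # Both instances should now produce identical outputs for the same input.
    print(mx.nd.abs(ref - out).max().asscalar())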

