Re: Making new operators and AMP lists

2019-05-29 Thread Skalicky, Sam
Thanks Przemek for the additional explanation, but I’m still confused about this part. I don’t understand the explanation of the optimizer’s interaction here. >> The technical reason for that is the only place from which one can get MXNet >> operators is MXListAllOps call, which gives back all

Re: Making new operators and AMP lists

2019-05-29 Thread Sheng Zha
> The second misunderstanding is that the ask from the test is to somehow "add > support" for AMP in the operator. That is definitely not true and adding a > single line in either FP32_FUNCS (cast me always to FP32) or FP16_FP32_FUNCS > (do not do anything with me because I'm not relevant for

Re: Making new operators and AMP lists

2019-05-29 Thread Przemysław Trędak
Thank you all for responding. Let me address a few misunderstandings about AMP in the discussion so far: - I think the main misunderstanding in the discussion so far is that it would be somehow possible to implicitly cast to FP32 all previously unseen operators. I would much prefer such

Re: Making new operators and AMP lists

2019-05-28 Thread Jun Wu
Hi Marco, As it has been stated by others, the concern is not about how much cost it incurs, but about whether it should be there from the beginning. Just imagine how you would explain to an MKLDNN developer who just added an operator for CPU computing that he/she must put the operator name in

Re: Making new operators and AMP lists

2019-05-28 Thread Anirudh Subramanian
The assumption is the AMP requirement is something that has a steep learning curve. Developers may get confused by the name, but the question the developer has to essentially answer is (and this can be added in the error): 1. If the operator can run in FP16 and FP32 modes, put it in

Re: Making new operators and AMP lists

2019-05-28 Thread Sheng Zha
This is driving people away exactly because they don't know this is what's asked of them, and why they are asked of this AMP requirement in the first place. To someone who's already familiar with the context, this is little to be worried about. It's now the process that requires everyone to

Re: Making new operators and AMP lists

2019-05-28 Thread Marco de Abreu
I'm having trouble seeing how adding the name of an operator to a single file is too much to expect from somebody, or how this is driving people away. If somebody adds a tutorial, they also have to add the tutorial to a specific file. As far as I can tell, this has not resulted in people not

Re: Making new operators and AMP lists

2019-05-28 Thread Sheng Zha
Please don't be dismissive. For a new contributor, every time we impose a requirement on the developer, a new task pops up out of nowhere. The more we do that, the more costly it becomes for new contributors to have their contributions accepted, and hence the less likely they will stick

Re: Making new operators and AMP lists

2019-05-28 Thread Sheng Zha
AMP is in contrib so there's no guarantee that the API is final. Adopting the test as-is is harmful because operator authors should not be required to invest in an experimental feature that they are not aware of. I'm all for openness and welcoming, but think about whether you'd like to turn

Re: Making new operators and AMP lists

2019-05-28 Thread Marco de Abreu
Hey Jun, could we please quantify the amount of time and effort that is required to follow the actions to add an operator to the FP32_FUNCS? To me it sounds like we are making this a bigger deal than it actually is. -Marco On Wed, May 29, 2019 at 1:20 AM Jun Wu wrote: > Thanks for initiating

Re: Making new operators and AMP lists

2019-05-28 Thread Marco de Abreu
While AMP might be an experimental feature, I rather would like to put the focus on the maturity of its interfaces. If the interfaces and the actions developers have to do aren't finalized yet, I'd agree with disabling the test. But if the API is final and easy to use, I don't see why adopting

Re: Making new operators and AMP lists

2019-05-28 Thread Jun Wu
Thanks for initiating the discussion on dev. I understand the dilemma from designing AMP for making the feature usable and functional as well as for not breaking other developer experience. However, IMO, this is not about WHEN we should let other developers know they have made a mistake by not

Re: Making new operators and AMP lists

2019-05-28 Thread Sheng Zha
The support for AMP should not be a burden of authors of new operators. The lint analogy doesn't apply because lint is for established and accepted coding standard at MXNet and AMP is not. AMP is an experimental feature right now and it doesn't make sense to require contributors to invest in

Re: Making new operators and AMP lists

2019-05-28 Thread Anirudh Subramanian
Hi, I agree with Marco there are some easy wins to be had since many new GPU operators come with FP16 support. I think we can explore the overhead to the developer and try to reduce the feedback time for the developer, so that cost associated with adding support for AMP feature is minimized.

Re: Making new operators and AMP lists

2019-05-28 Thread Marco de Abreu
Hi, I'm generally in favour of these kind of tests since they make developers aware of changes they have to make which they would usually not be aware of. We have a similar test for tutorials, for example. Whenever somebody adds a tutorial, there's a validation that assures that all constraints in

Re: Making new operators and AMP lists

2019-05-28 Thread Sheng Zha
Thanks for initiating the discussion. The premise for adding the test was to make sure that AMP feature is "not broken", but that's IMO not the right view. AMP is not supposed to support a new operator it hasn't seen before in the first place. There's no way for it to know whether the fp32

Re: Making new operators and AMP lists

2019-05-28 Thread Anirudh Subramanian
Hi all, I had a discussion with Przemyslaw about this offline. There are two options we can pursue to make the developer experience better (since currently they have to wait for CI to complete): 1. Obtain the current lists and check if the length of the combined lists is the same as that of MXListAllOpNames
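Option 1 above can be sketched with plain Python sets. This is only an illustrative, hypothetical sketch: the list names (FP16_FUNCS, FP32_FUNCS, FP16_FP32_FUNCS) follow the thread, but the operator names are stand-ins for the real output of MXListAllOpNames, and the actual CI check lives in the MXNet test suite.

```python
# Hypothetical stand-ins for the AMP lists discussed in the thread.
FP16_FUNCS = {"Convolution", "FullyConnected"}       # safe to run in FP16
FP32_FUNCS = {"softmax", "norm"}                     # always cast to FP32
FP16_FP32_FUNCS = {"reshape", "transpose"}           # dtype-agnostic

def check_amp_coverage(all_ops, *lists):
    """Return (ops missing from every list, ops present in more than one list)."""
    covered = set().union(*lists)
    missing = set(all_ops) - covered
    duplicated = set()
    for i, a in enumerate(lists):
        for b in lists[i + 1:]:
            duplicated |= a & b
    return missing, duplicated

# Stand-in for MXListAllOpNames; 'my_new_op' simulates a freshly added operator.
all_ops = ["Convolution", "FullyConnected", "softmax", "norm",
           "reshape", "transpose", "my_new_op"]
missing, dup = check_amp_coverage(all_ops, FP16_FUNCS, FP32_FUNCS, FP16_FP32_FUNCS)
# missing == {'my_new_op'}: the uncategorized operator the CI test would flag.
```

Checking set membership this way is cheap, which is why the thread frames the cost as a process question rather than a technical one.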

Re: [DISCUSS] 1.5.0 Release Plan

2019-05-28 Thread Haibin Lin
Feel free to let me know if anything our team can help :) > > > > BR, > > > > --Patric > > > > > -Original Message- > > > From: Lai Wei [mailto:roywei...@gmail.com] > > > Sent: Thursday, May 23, 2019 6:05 AM > > > To: dev@mxnet.

Re: [RFC] Support for creation of Large Tensors in MXNet

2019-05-28 Thread Marco de Abreu
o let me know if anything I can help. > > Thanks, > -tao > > > -Original Message- > From: Lin Yuan [mailto:apefor...@gmail.com] > Sent: Saturday, May 25, 2019 1:36 AM > To: dev@mxnet.incubator.apache.org; Lv, Tao A > Cc: d...@mxnet.apache.org > Subject: Re: [RFC] Su

Re: slack access

2019-05-25 Thread Marco de Abreu
Hi, Welcome to MXNet! Please note that you have to subscribe to dev@ first or otherwise your emails won't go through. -Marco Bossi, Marcelo wrote on Fri., May 24, 2019, 22:37: > Hi Marc, > > > > I believe you need to send a Slack request to: > dev@mxnet.incubator.apache.org. > > > > Please

RE: [RFC] Support for creation of Large Tensors in MXNet

2019-05-25 Thread Lv, Tao A
...@mxnet.apache.org Subject: Re: [RFC] Support for creation of Large Tensors in MXNet Hi Sheng, Thanks for the nice suggestions. To summarize the current status and future plan of this project: There were some missing operators from #11742 that did not support large tensors. Thanks to Rohit's help

Re: Regarding MXNET tutorials - Roadmap topics

2019-05-24 Thread Satish Gopalakrishna
Thanks Steffen and Aaron for the response. I will look into the links provided. Sincere thanks and have a great weekend. Regards, Satish On Fri, May 24, 2019 at 8:37 PM Steffen Rochel wrote: > Hi Satish - thanks for your interest in MXNET. Suggest you look at > MXNet.io for updates and

Re: CUDA recommendation

2019-05-24 Thread Jake Lee
Hi Aaron, The performance regression that Sam mentioned above is this one [1]. Since CUDA 10.1 doesn’t have the regression, Nvidia suggested we move to CUDA 10.1. Regarding documentation, I will raise a PR to update it. Thanks. Jake [1] https://github.com/apache/incubator-mxnet/issues/14725

Re: CUDA recommendation

2019-05-24 Thread Aaron Markham
Sounds like we need an * near the CUDA 10.1 recommendation if there are known performance issues. Is there a particular issue # tracking the performance issues? I'm seeing a CUDA 10 Windows issue here that seems unresolved: https://github.com/apache/incubator-mxnet/issues/14479 On Fri, May 24,

Re: Regarding MXNET tutorials - Roadmap topics

2019-05-24 Thread Steffen Rochel
Hi Satish - thanks for your interest in MXNET. Suggest you look at MXNet.io for updates and instructions to join discussion forum and dev list. These are all good places for questions and contributions. Best Steffen On Fri, May 24, 2019 at 2:21 PM Satish Gopalakrishna wrote: > Sir, > > This is

Re: CUDA recommendation

2019-05-24 Thread Sheng Zha
10.1 is recommended. The oldest CUDA version that we release is 8.0. -sz On 2019/05/24 23:29:38, Marco de Abreu wrote: > While we are at the topic, did we actually agree on dropping support for > some versions? So far we are releasing all the way back to CUDA 7.5 I think > > -Marco > >

Re: CUDA recommendation

2019-05-24 Thread Marco de Abreu
While we are at the topic, did we actually agree on dropping support for some versions? So far we are releasing all the way back to CUDA 7.5 I think -Marco Skalicky, Sam wrote on Fri., May 24, 2019, 23:43: > Hi Aaron > > Right now, the most stable version is CUDA 9.2. CUDA 10 is supported and

Re: CUDA recommendation

2019-05-24 Thread Skalicky, Sam
Hi Aaron Right now, the most stable version is CUDA 9.2. CUDA 10 is supported and some pip wheels are available, but there are known performance issues. And we are quickly moving to CUDA 10.1. So things are still in flux now. I think the best approach would be to wait a couple more weeks

Re: slack access

2019-05-24 Thread Aaron Markham
I sent you all invites. Join #mxnet once you're in! On Fri, May 24, 2019 at 1:37 PM Bossi, Marcelo wrote: > Hi Marc, > > > > I believe you need to send a Slack request to: > dev@mxnet.incubator.apache.org. > > > > Please let me know if you don’t get access by mid next week. I will track > it

Re: slack access

2019-05-24 Thread Bossi, Marcelo
Hi Marc, I believe you need to send a Slack request to: dev@mxnet.incubator.apache.org. Please let me know if you don’t get access by mid next week. I will track it down for you. Thanks, Marcelo Bossi Technical Business Developer @AWS AI – Apache MXNet

Re: [RFC] Support for creation of Large Tensors in MXNet

2019-05-24 Thread Lin Yuan
> > Thanks, > > -tao > > > > -Original Message- > > From: Srivastava, Rohit Kumar [mailto: > srivastava@buckeyemail.osu.edu] > > Sent: Sunday, May 19, 2019 7:23 AM > > To: dev@mxnet.incubator.apache.org > > Subject: Re:

Re: [Announcement] New Committer - Yuxi Hu

2019-05-24 Thread Marco de Abreu
Congratulations, Darren :) Thanks for your great work in Horovod. > > > > > > > -Original Message- > > > > From: Chaitanya Bapat [mailto:chai.ba...@gmail.com] > > > > Sent: Friday, May 24, 2019 9:46 AM > > > > To: dev@mxnet.incubator.apac

Re: [Announcement] New Committer - Yuxi Hu

2019-05-24 Thread Lin Yuan
- > > > From: Chaitanya Bapat [mailto:chai.ba...@gmail.com] > > > Sent: Friday, May 24, 2019 9:46 AM > > > To: dev@mxnet.incubator.apache.org > > > Subject: Re: [Announcement] New Committer - Yuxi Hu > > > > > > Congratulations Dar

Re: [Announcement] New Committer - Yuxi Hu

2019-05-24 Thread Aaron Markham
2019 9:46 AM > > To: dev@mxnet.incubator.apache.org > > Subject: Re: [Announcement] New Committer - Yuxi Hu > > > > Congratulations Darren! > > > > On Fri, 24 May, 2019, 12:51 AM Sheng Zha, wrote: > > > > > Hi all, > > > > > > Please join me in welco

Re: [Announcement] New Committer - Aston Zhang

2019-05-24 Thread Aaron Markham
Congrats Aston. On Thu, May 23, 2019, 18:45 Chaitanya Bapat wrote: > Congratulations Aston! Your book is fantastic! > > On Fri, 24 May, 2019, 12:50 AM Sheng Zha, wrote: > > > Hi all, > > > > Please join me in welcoming Aston Zhang as a new committer of Apache > MXNet > > (incubating)! > > > >

RE: [Announcement] New Committer - Yuxi Hu

2019-05-23 Thread Zhao, Patric
Congratulations, Darren :) Thanks for your great work in Horovod. > -Original Message- > From: Chaitanya Bapat [mailto:chai.ba...@gmail.com] > Sent: Friday, May 24, 2019 9:46 AM > To: dev@mxnet.incubator.apache.org > Subject: Re: [Announcement] New Comm

Re: [Announcement] New Committer - Aston Zhang

2019-05-23 Thread Chaitanya Bapat
Congratulations Aston! Your book is fantastic! On Fri, 24 May, 2019, 12:50 AM Sheng Zha, wrote: > Hi all, > > Please join me in welcoming Aston Zhang as a new committer of Apache MXNet > (incubating)! > > Aston has been quite active in helping the community grow. Moreover, he > helps create the

Re: [Announcement] New Committer - Yuxi Hu

2019-05-23 Thread Chaitanya Bapat
Congratulations Darren! On Fri, 24 May, 2019, 12:51 AM Sheng Zha, wrote: > Hi all, > > Please join me in welcoming Yuxi (Darren) Hu as a new committer of Apache > MXNet (incubating)! > > Yuxi has been one of the core contributors of Horovod integration in > MXNet. Along the way, he has > been

Re: [Announcement] New Committer - Kedar Bellare

2019-05-23 Thread Marco de Abreu
Welcome! On Thu, May 23, 2019 at 6:49 PM Lin Yuan wrote: > Welcome on board! > > Lin > > On Thu, May 23, 2019 at 9:01 AM Carin Meier wrote: > > > Please join me in welcoming Kedar Bellare > https://github.com/kedarbellare > > as > > a new committer. > > > > Kedar has worked on the Clojure

Re: [Announcement] New Committer - Kedar Bellare

2019-05-23 Thread Lin Yuan
Welcome on board! Lin On Thu, May 23, 2019 at 9:01 AM Carin Meier wrote: > Please join me in welcoming Kedar Bellare https://github.com/kedarbellare > as > a new committer. > > Kedar has worked on the Clojure package and helped improve it by porting > the Scala image and infer functionality

Re: [DISCUSS] 1.5.0 Release Plan

2019-05-23 Thread Lin Yuan
> > Feel free to let me know if anything our team can help :) > > BR, > > --Patric > > > -Original Message- > > From: Lai Wei [mailto:roywei...@gmail.com] > > Sent: Thursday, May 23, 2019 6:05 AM > > To: dev@mxnet.incubator.apache.org > &g

RE: [DISCUSS] 1.5.0 Release Plan

2019-05-23 Thread Zhao, Patric
> Sent: Thursday, May 23, 2019 6:05 AM > To: dev@mxnet.incubator.apache.org > Subject: Re: [DISCUSS] 1.5.0 Release Plan > > Hi @dev, > > Thanks for working hard for the 1.5 release, since there has been several > release blockers (mostly fixed). We are extending the code freeze to

Re: Dependency Update

2019-05-22 Thread Jake Lee
Thanks Aaron, that's a great suggestion. The reason why I put it under tools/dependencies is that the doc is intended for developers who want to contribute to updating the dependencies of our PyPI package. Regarding the CI, I'm also working on upgrading the CUDA/cuDNN versions that CI uses - PR[1][2].

Re: Dependency Update

2019-05-22 Thread Aaron Markham
Thanks for doing a thorough look at the version ranges. I have this PR [1] waiting for review that tries to pin graphviz and opencv, and it updates CI as well as the docs that go on the website. I think your updates would be beneficial in the docs that go on the website and should also update CI.

Re: [DISCUSS] 1.5.0 Release Plan

2019-05-22 Thread Lai Wei
; [3] https://github.com/apache/incubator-mxnet/issues/14203 > > > [4] https://github.com/apache/incubator-mxnet/issues/14085 > > > [5] https://github.com/apache/incubator-mxnet/pull/14877 > > > [6] https://github.com/dmlc/mshadow/pull/374 > > > [7] https://github.com/ap

Re: Dependency Update

2019-05-22 Thread Qing Lan
Great work Jake! The content on the CPU/GPU build instructions is really helpful. Thanks, Qing From: Jake Lee Sent: Wednesday, May 22, 2019 17:26 To: dev@mxnet.incubator.apache.org Subject: Dependency Update Dear Community, I have been working on dependency update

Re: warnings as errors

2019-05-22 Thread Pedro Larroy
I was not able to fix the warnings on the mshadow type switch with unused local typedefs; that's one example of a warning that I would disable. I couldn't find a way to solve that one, and I think the ramifications of an unused typedef are not likely to cause bugs in the code and are more of a pedantic

Re: [Discussion] Remove bundled llvm OpenMP

2019-05-22 Thread Anton Chernov
We are now waiting for a committer's review and merge. Wed, May 22, 2019 at 22:14, Pedro Larroy : > Thanks Aaron and Anton! Can we rebase to update the PR? Let me know > how I can help further if you find some problems. > > On Wed, May 22, 2019 at 6:49 AM Aaron Markham > wrote: > > > > I

Re: [Discussion] Remove bundled llvm OpenMP

2019-05-22 Thread Anton Chernov
Great! Thank you, Aaron. I have rebased it. Wed, May 22, 2019 at 15:49, Aaron Markham : > I reopened it for you. > > On Wed, May 22, 2019, 05:25 Anton Chernov wrote: > > > I don't have the necessary rights to reopen this PR. > > > > Mon, May 20, 2019 at 08:00, Pedro Larroy : > > > > > Hi Anton,

Re: [Discussion] Remove bundled llvm OpenMP

2019-05-22 Thread Pedro Larroy
Thanks Aaron and Anton! Can we rebase to update the PR? Let me know how I can help further if you find some problems. On Wed, May 22, 2019 at 6:49 AM Aaron Markham wrote: > > I reopened it for you. > > On Wed, May 22, 2019, 05:25 Anton Chernov wrote: > > > I don't have necessary rights to

Re: Report of MXNet NumPy Project Status

2019-05-22 Thread Junru Shao
Nice progress Jun! On Wed, May 22, 2019 at 12:12 AM Jun Wu wrote: > Dear Community, > > A few months ago, we submitted this RFC > proposing > introducing NumPy-compatible coding experience into MXNet. As it has been > some time since

Re: [Discussion] Remove bundled llvm OpenMP

2019-05-22 Thread Anton Chernov
I don't have the necessary rights to reopen this PR. Mon, May 20, 2019 at 08:00, Pedro Larroy : > Hi Anton, Stas. > > Can we reopen this PR and get it merged as per the data collected by Stas? > > https://github.com/apache/incubator-mxnet/pull/12160 > > >

Re: Report of MXNet NumPy Project Status

2019-05-22 Thread Pedro Larroy
Thanks, that's a nice summary. Great job and good to know the progress. I think we can do some exciting stuff in terms of parsing the Python AST and converting to a computational graph. Maybe we could brainstorm on that further on the linked ticket. On Wed, May 22, 2019 at 12:12 AM Jun Wu wrote:

Re: New PMC member: Dick Carter

2019-05-21 Thread Pedro Larroy
Finally! Welcome! On Tue, May 21, 2019 at 6:28 PM Steffen Rochel wrote: > > Congratulation Dick! > > On Tue, May 21, 2019 at 2:43 PM Carin Meier wrote: > > > Congrats and welcome! > > > > On Tue, May 21, 2019 at 4:37 PM Marco de Abreu > > wrote: > > > > > The Project Management Committee (PMC)

Re: DGL crashes in the recent master branch

2019-05-21 Thread Da Zheng
Yes, I created an issue in MXNet github: https://github.com/apache/incubator-mxnet/issues/15029, which shows a piece of small code that reproduces the bug. The bug should be related to this PR: https://github.com/apache/incubator-mxnet/pull/14570. Another update in the progress is that the DLPack

Re: DGL crashes in the recent master branch

2019-05-21 Thread Chris Olivier
Might be helpful if you wrote a unit test for this and other behaviors that DGL depends upon to reduce the likelihood that it happens again. Just a suggestion. That would show good ownership, imho. On Tue, May 21, 2019 at 6:11 PM Chris Olivier wrote: > Thanks for clarifying, Da. > > On Tue,

Re: DGL crashes in the recent master branch

2019-05-21 Thread Chris Olivier
Thanks for clarifying, Da. On Tue, May 21, 2019 at 5:44 PM Zheng, Da wrote: > DGL is a framework of deep learning on graphs. https://www.dgl.ai/ > > It's not that MXNet is responsible to be compatible with DGL. The crashes > are caused by bugs in MXNet. > > Best, > Da > > On 5/21/19, 5:39 PM,

Re: DGL crashes in the recent master branch

2019-05-21 Thread Zheng, Da
DGL is a framework of deep learning on graphs. https://www.dgl.ai/ It's not that MXNet is responsible to be compatible with DGL. The crashes are caused by bugs in MXNet. Best, Da On 5/21/19, 5:39 PM, "Chris Olivier" wrote: Curious what is DGL and what is Apache/MXNet’s responsibility to

Re: DGL crashes in the recent master branch

2019-05-21 Thread Chris Olivier
Curious what is DGL and what is Apache/MXNet’s responsibility to it to maintain compatibility rather than the other way around? On Tue, May 21, 2019 at 3:39 PM Zheng, Da wrote: > Hello all, > > I recently find that DGL don’t run with the recent MXNet. DGL crashes with > memory errors. >

Re: warnings as errors

2019-05-21 Thread Sheng Zha
It would be great to enforce the check for warnings and treat as errors. Some questions I have: - what are the warnings that you think should be ignored? - for the rest of the warning types, can we turn them on one by one? -sz On 2019/05/21 22:33:51, Pedro Larroy wrote: > Hi dev@ > > I try

Re: [ANNOUNCEMENT] New Committer: Przemyslaw Tredak (ptrendx)

2019-05-21 Thread Marco de Abreu
Welcome! On Tue, May 21, 2019 at 11:48 PM Carin Meier wrote: > Welcome! > > On Tue, May 21, 2019 at 5:32 PM Naveen Swamy wrote: > > > The Project Podling Management Committee (PPMC) for Apache MXNet has > > invited Przemyslaw Tredak (ptrendx) based on his contribution to MXNet to > > become a

Re: [ANNOUNCEMENT] New Committer: Przemyslaw Tredak (ptrendx)

2019-05-21 Thread Carin Meier
Welcome! On Tue, May 21, 2019 at 5:32 PM Naveen Swamy wrote: > The Project Podling Management Committee (PPMC) for Apache MXNet has > invited Przemyslaw Tredak (ptrendx) based on his contribution to MXNet to > become a committer and we are pleased to announce that he has accepted. > >

Re: New PMC member: Dick Carter

2019-05-21 Thread Carin Meier
Congrats and welcome! On Tue, May 21, 2019 at 4:37 PM Marco de Abreu wrote: > The Project Management Committee (PMC) for Apache MXNet > has invited Dick Carter to become a PMC member and we are pleased > to announce that he has accepted. > > Dick has been a great help over the past years to

Re: [Discussion] Remove bundled llvm OpenMP

2019-05-20 Thread Pedro Larroy
Hi Anton, Stas. Can we reopen this PR and get it merged as per the data collected by Stas? https://github.com/apache/incubator-mxnet/pull/12160 https://cwiki.apache.org/confluence/display/MXNET/Benchmarking+MXNet+with+different+OpenMP+implementations There are multiple issues that will be

Re: [RFC] Support for creation of Large Tensors in MXNet

2019-05-18 Thread Sheng Zha
middle of year. But I'm not sure if > MXNet has plan to support that. > > Thanks, > -tao > > -Original Message- > From: Srivastava, Rohit Kumar [mailto:srivastava@buckeyemail.osu.edu] > Sent: Sunday, May 19, 2019 7:23 AM > To: de

RE: [RFC] Support for creation of Large Tensors in MXNet

2019-05-18 Thread Lv, Tao A
:23 AM To: dev@mxnet.incubator.apache.org Subject: Re: [RFC] Support for creation of Large Tensors in MXNet Hi Tao, There are already couple of operators implemented in MXNet that are currently supporting Tensors with size over ~4.5 billion. In the meantime core MXNet can move ahead

Re: [RFC] Support for creation of Large Tensors in MXNet

2019-05-18 Thread Srivastava, Rohit Kumar
ause issue there. To cover more cases, MKL-DNN is going to support INT64 dimension size in its coming 1.0 major release. -tao -Original Message- From: Lin Yuan [mailto:apefor...@gmail.com] Sent: Tuesday, April 30, 2019 12:56 AM To: dev@mxnet.incubator.apache.org S

Re: [Proposal] New operator graph for MXNet

2019-05-17 Thread Pedro Larroy
should or should not use NNVM2 in the future. But this is not something that should be sneaked into MXNet through a sub-repository without discussion, planning and proper testing. I have extensively (re)read through the Relay and TVM papers, including their references. As it stands today, the goals of the TVM

Re: [DISCUSS] 1.5.0 Release Plan

2019-05-15 Thread Junru Shao
ator-mxnet/issues/14203 > > [4] https://github.com/apache/incubator-mxnet/issues/14085 > > [5] https://github.com/apache/incubator-mxnet/pull/14877 > > [6] https://github.com/dmlc/mshadow/pull/374 > > [7] https://github.com/apache/incubator-mxnet/pull/14952 > >

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Junru Shao
I do want to mention some points that I believe I > > > > should mention. > > > > > > > > While I agree with Tianqi that every design has its pros and cons, I > > > would > > > > love to emphasize that a *good taste* of system design is to opt

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Tianqi Chen
While I agree with Tianqi that every design has its pros and cons, I > > > would > > > > love to emphasize that a *good taste* of system design is to optimize > > the > > > > bottleneck, enhance expressiveness (and usability), i.e. to do what > > needs > &

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Pedro Larroy
Hi, Thanks for all the materials and key points raised. The discussion has many ramifications; I will think about them and research them very carefully before replying further. Please also don't quickly dismiss the points I have raised and reduce them to typed vs untyped or pedantic C++ comments,

Re: Python2 End of Life

2019-05-15 Thread Damien Stanton
+1 Standardizing on Python 3 will make things easier for both MXNet devs as well as users. On Wed, May 15, 2019 at 2:49 PM sandeep krishnamurthy < sandeep.krishn...@gmail.com> wrote: > +1 Thanks for bringing this up Zach. > Can we include this intent to deprecate support for Python 2, in the >

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Junru Shao
either > > performance > > > or expressiveness. Generally speaking, typed or untyped, shared_ptr or > > > unique_ptr, won't affect the overall performance when it comes to deep > > > learning workload, specially when we have an async scheduler that does > > good &g

Re: [DISCUSS] 1.5.0 Release Plan

2019-05-15 Thread Anirudh Subramanian
; [6] https://github.com/dmlc/mshadow/pull/374 > [7] https://github.com/apache/incubator-mxnet/pull/14952 > > -Original Message- > From: Lai Wei [mailto:roywei...@gmail.com] > Sent: Wednesday, May 15, 2019 2:57 PM > To: dev@mxnet.incubator.apache.org > Subject: Re:

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Zach Kimberg
> > performance > > > or expressiveness. Generally speaking, typed or untyped, shared_ptr or > > > unique_ptr, won't affect the overall performance when it comes to deep > > > learning workload, specially when we have an async scheduler that does > > good > > &

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Naveen Swamy
r or > > unique_ptr, won't affect the overall performance when it comes to deep > > learning workload, specially when we have an async scheduler that does > good > > latency hiding in MXNet - to me, these are not major issues that are > worth > > re-designing our entir

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Anirudh Subramanian
async scheduler that does good > latency hiding in MXNet - to me, these are not major issues that are worth > re-designing our entire system. > > To benefit users - real-world ML practitioners, the most thing I would love > to mention is that dataflow graph-based representation is increasin

Re: Python2 End of Life

2019-05-15 Thread Zach Kimberg
The website I listed earlier (https://python3statement.org/) is backed by a git repository ( https://github.com/python3statement/python3statement.github.io) so that projects can open a PR to add themselves to the list. Beyond that, they also have a very nice timeline that projects can add

Re: TensorRT blocker

2019-05-15 Thread Per da Silva
Hey, Yup - I've @'ed you to the fix PR, would be great to get your 2c there just to be sure it's all good. https://github.com/apache/incubator-mxnet/pull/14960 Cheers, Per On Wed, May 15, 2019 at 4:14 PM Sunderland, Kellen wrote: > Looks like it's merged. Can I help with a fix Per? > > On

Re: TensorRT blocker

2019-05-15 Thread Sunderland, Kellen
Looks like it's merged. Can I help with a fix Per? On May 15, 2019 3:00 AM, Per da Silva wrote: Hi everyone, Could a committer please merge this PR: https://github.com/apache/incubator-mxnet/pull/14958 It disables the TensorRT steps to unblock CI while a fix is being worked on. Cheers, Per

Re: Python2 End of Life

2019-05-15 Thread Marco de Abreu
+1 I'd like to point out that one of our dependencies, scikit, already dropped support for python 2. If more dependencies drop support before 1.1.20, we might start running into further issues like we already did. As part of that decision, I'd propose to see what the detailed timelines of our

RE: [DISCUSS] 1.5.0 Release Plan

2019-05-15 Thread Lv, Tao A
- From: Lai Wei [mailto:roywei...@gmail.com] Sent: Wednesday, May 15, 2019 2:57 PM To: dev@mxnet.incubator.apache.org Subject: Re: [DISCUSS] 1.5.0 Release Plan Hi Anirudh, I see there was an offline disucssion <https://github.com/apache/incubator-mxnet/pull/14173#pullrequestreview-235846341>

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Junru Shao
or untyped, shared_ptr or unique_ptr, won't affect the overall performance when it comes to deep learning workload, specially when we have an async scheduler that does good latency hiding in MXNet - to me, these are not major issues that are worth re-designing our entire system. To benefit users - real

Re: [DISCUSS] 1.5.0 Release Plan

2019-05-15 Thread Lai Wei
Hi Anirudh, I see there was an offline discussion and I have updated the AMP feature and your project on the release tracker,

Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Tianqi Chen
The core part of the proposal is to move the graph to be much more strongly typed template class. I think this is mainly a point of engineering taste, and both sides have pros and cons, let me list them before I share my thoughts on this issue: - Typed fields certainly enjoy more compile-time

Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Pedro Larroy
Hi Tianqi I thought a bit more about your comments and I think there is a simple way to address your concerns that satisfies both needs. We can have a NodeAttributes template class which has a map of string to any, as is currently the case, so the graph can be used in the highly dynamic

Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Pedro Larroy
Hi Tianqi Thanks for the quick response. Could you point to examples where graph.h is being exposed which would not be possible with what I propose? I don't think my proposal is having any impact in language bindings, and the way I describe it doesn't affect having or not having higher language

Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Tianqi Chen
Thanks for the proposal. Let me share some of my thoughts: Specific comments on the proposal --- The heavy use of generic in the Graph type was a huge departure from type-erased data structure which was presented in the previous design. While we

Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Pedro Larroy
Hi Sheng Could you provide relevant links to Relay and what you would recommend to read so we have a focused discussion, instead of me potentially mis-searching? Probably I also missed the discussion or vote on the mailing list regarding including TVM as a dependency or future plans on using

Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Sheng Zha
Hi Pedro, Thanks for taking the initiative. Skimming through the design doc, I didn't see a comparison with existing solutions such as Relay in TVM, which is already a dependency of mxnet. Could you elaborate on the comparison with existing solutions in the design doc too? -sz On

Re: assimilation of mshadow into the MXNet codebase

2019-05-14 Thread Pedro Larroy
Hi Sheng. Do you need some help with this? Do we plan to have this for 1.5? Pedro. On Wed, Apr 24, 2019 at 4:26 PM Pedro Larroy wrote: > > Thanks. Great to read. > > On Wed, Apr 24, 2019 at 2:19 PM Sheng Zha wrote: > > > > The community has agreed to donate mshadow to the mxnet code base. I

Re: Python2 End of Life

2019-05-14 Thread Pedro Larroy
+1 Let python2 rest; let's simplify our infrastructure and drop the need to support old Python versions. On Mon, May 13, 2019 at 1:58 PM Jake Lee wrote: > > +1 Recently I upgraded the Numpy version and found out that Pylint had > false alarm on it. The Pylint fix is only available on Python3. So I >

Re: [INVITATION] 14th of May 2019 / Berlin MXNet Recurring User Group Meeting

2019-05-14 Thread Per da Silva
Hey Wen-Yang Chu, Unfortunately, my talents are more on the CI/CD side at the moment, so I don't know that I'll be able to answer your question. Is there anyone out there that can join us and shine some light on the situation? If no one is able to join, I'll try to understand your question and find

Re: [INVITATION] 14th of May 2019 / Berlin MXNet Recurring User Group Meeting

2019-05-14 Thread Wen-Yang Chu
Hi Per da Silva, I would like to join this meeting. I would like to ask about a solution for how to properly replace the deprecated "crop" layer. I found many have the same issue and I could not find a proper solution. It can be a real deal breaker for me even though I am really fond of MXNet and

Re: [Proposal] MXNet operator benchmark library

2019-05-13 Thread Marco de Abreu
Great proposal! sandeep krishnamurthy wrote on Tue., May 14, 2019, 04:45: > Hi Naveen, > > Thanks for your feedback and suggestions. I have updated the document > addressing the feedback and concerns, alternate solutions pros/cons are > added. >

Re: [Proposal] MXNet operator benchmark library

2019-05-13 Thread sandeep krishnamurthy
Hi Naveen, Thanks for your feedback and suggestions. I have updated the document addressing the feedback and concerns, alternate solutions pros/cons are added. https://cwiki.apache.org/confluence/display/MXNET/MXNet+Operator+Benchmarks Best, Sandeep On Mon, May 13, 2019 at 1:20 PM Naveen Swamy

Re: [Announcement] New Committer - Zach Kimberg

2019-05-13 Thread Pedro Larroy
Congratulations On Thu, May 9, 2019 at 11:29 AM Chaitanya Bapat wrote: > > Congratulations Zachary! Way to go! > > On Thu, 9 May 2019 at 14:01, Carin Meier wrote: > > > Congrats! > > > > On Thu, May 9, 2019 at 1:41 PM Per da Silva wrote: > > > > > Nice one! Congratulations =) > > > > > > On

Re: Python2 End of Life

2019-05-13 Thread Jake Lee
+1 Recently I upgraded the Numpy version and found out that Pylint had a false alarm on it. The Pylint fix is only available on Python3. So I changed the default python version of the 'make pylint' command to python3 (the PR hasn't been merged yet). It's time to drop support for Python2. On Mon, May 13, 2019

Re: Python2 End of Life

2019-05-13 Thread Yuan Tang
+1 On Mon, May 13, 2019 at 4:37 PM Junru Shao wrote: > +1 > > On Mon, May 13, 2019 at 1:34 PM Aaron Markham > wrote: > > > +1 for the pledge and to start moving things to Python 3. > > I think our installation instructions and tutorials can be updated to > > default to Python3 and we should
