Re: [Discuss] MXNet Python 2 Support Deprecation

2019-07-19 Thread Chaitanya Bapat
+1 definitely.

Going forward:
The MXNet repo as it stands has roughly 95,000 lines of Python code [1].
Open edX has about a million LOC (10x ours), and they treated the mammoth
effort of porting from Python 2 to 3 as a separate project, named
Incremental Improvement [2]. We can take inspiration from them and mount a
similar effort with a call to action to the community. Issues can be
maintained on a separate JIRA board to track high-priority tasks.

Also, I can see gluon-nlp adding itself to the Python 3 Statement. Once the
vote passes, one of us could submit a PR to add MXNet as well.

[1] https://codeclimate.com/
[2]
https://open.edx.org/blog/python-2-is-ending-we-need-to-move-to-python-3/
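
For context on the follow-up question below (what minimum Python 3 version
to support), here is a tiny illustrative snippet -- my own sketch, not MXNet
code -- of two features that become available only with a 3.6+ floor:

    # Illustrative only: features gated on Python 3.6+.
    from typing import List

    def running_mean(values: List[float]) -> float:  # function annotations
        total: float = 0.0                           # variable annotation (3.6+)
        for v in values:
            total += v
        return total / max(len(values), 1)

    print(f"mean = {running_mean([1.0, 2.0, 3.0]):.2f}")  # f-string (3.6+)

Function annotations alone only need Python 3.0, but variable annotations
and f-strings are exactly the kind of thing a 3.6 floor would unlock.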


On Thu, 18 Jul 2019 at 21:39, Kshitij Kalambarkar <
kshitijkalambar...@gmail.com> wrote:

> +1
>
> On Fri, Jul 19, 2019, 04:28 Pedro Larroy 
> wrote:
>
> > Seems 3.6 is a reasonable choice.
> >
> > On Thu, Jul 18, 2019 at 2:15 PM Marco de Abreu 
> > wrote:
> > >
> > > Looking at EOL is certainly a good idea! I think once we get closer to
> > > deprecation, we can check adoption statistics to make a well-informed
> > > decision that gives us the most advantages without dropping the ball
> on a
> > > majority of users (or supporting a branch that is going EOL soon). A
> > survey
> > > from 2018 [1] determined the following distribution:
> > > 3.5: 11%
> > > 3.6: 54%
> > > 3.7: 30%
> > >
> > > Deprecation for 3.5 is scheduled for 2020-09-13 [2]. Deprecation for 3.6
> > > is scheduled for 2021-12-23 [2]. Deprecation for 3.7 is scheduled for
> > > 2023-06-27 [2].
> > >
> > > Following the trend, I'd say that it would be a decision between Python
> > 3.6
> > > and 3.7. Later on, I'd propose to check recent surveys and also have a
> > > separate thread to determine if there's anything we're missing (e.g. a
> > big
> > > company being unable to use Python 3.7). What do you think?
> > >
> > > Best regards,
> > > Marco
> > >
> > > [1]: https://www.jetbrains.com/research/python-developers-survey-2018/
> > > [2]: https://devguide.python.org/#status-of-python-branches
> > >
> > > On Thu, Jul 18, 2019 at 9:42 PM Yuan Tang 
> > wrote:
> > >
> > > > I would suggest supporting Python 3.5+ since the earlier versions
> have
> > > > reached end-of-life status:
> > > > https://devguide.python.org/devcycle/#end-of-life-branches
> > > >
> > > > On Thu, Jul 18, 2019 at 3:36 PM Pedro Larroy <
> > pedro.larroy.li...@gmail.com
> > > > >
> > > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > This would simplify CI, reduce costs and more. I think a followup
> > > > > question is what would be the minimum Python 3 version supported?
> > > > > Depending on that we might be able to use type annotations for
> > example
> > > > > or other features.
> > > > >
> > > > > Pedro.
> > > > >
> > > > > On Thu, Jul 18, 2019 at 12:07 PM Yuan Tang <
> terrytangy...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > +1
> > > > > >
> > > > > > On Thu, Jul 18, 2019 at 2:51 PM Yuxi Hu 
> > wrote:
> > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > > On Thu, Jul 18, 2019 at 11:31 AM Tong He 
> > > > wrote:
> > > > > > >
> > > > > > > > +1
> > > > > > > >
> > > > > > > > Best regards,
> > > > > > > >
> > > > > > > > Tong He
> > > > > > > >
> > > > > > > >
> > > > > > > > Jake Lee  wrote on Thu, Jul 18, 2019, 11:29 AM:
> > > > > > > >
> > > > > > > > > +1
> > > > > > > > >
> > > > > > > > > On Thu, Jul 18, 2019 at 11:27 AM Junru Shao <
> > > > > junrushao1...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > +1
> > > > > > > > > >
> > > > > > > > > > On Thu, Jul 18, 2019 at 11:12 AM Anirudh Acharya <
> > > > > > > > anirudhk...@gmail.com>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > +1
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Jul 18, 2019 at 11:03 AM Marco de Abreu <
> > > > > > > > > marco.g.ab...@gmail.com
> > > > > > > > > > >
> > > > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > > +1
> > > > > > > > > > > >
> > > > > > > > > > > > -Marco
> > > > > > > > > > > >
> > > > > > > > > > > > Sheng Zha  wrote on Thu., Jul 18, 2019, 19:59:
> > > > > > > > > > > >
> > > > > > > > > > > > > Dear MXNet community,
> > > > > > > > > > > > >
> > > > > > > > > > > > > I'd like to reopen the discussion on deprecating
> > python2
> > > > > > > support.
> > > > > > > > > > This
> > > > > > > > > > > > > would help modernize the design and engineering
> > practice
> > > > in
> > > > > > > MXNet
> > > > > > > > > to
> > > > > > > > > > > help
> > > > > > > > > > > > > improve speed and quality.
> > > > > > > > > > > > >
> > > > > > > > > > > > > For this purpose, I reopened the issue on this
> here:
> > > > > > > > > > > > >
> > https://github.com/apache/incubator-mxnet/issues/8703
> > > > > > > > > > > > >
> > > > > > > > > > > > > If the consensus is towards the direction of
> dropping
> > > > > python2
> > > > > > > > > > support,
> > > > > > > > > > > I
> > > > > > > > > > > > > suggest we announce 

Re: [Announcement] New Committer - Aston Zhang

2019-05-23 Thread Chaitanya Bapat
Congratulations Aston! Your book is fantastic!

On Fri, 24 May, 2019, 12:50 AM Sheng Zha,  wrote:

> Hi all,
>
> Please join me in welcoming Aston Zhang as a new committer of Apache MXNet
> (incubating)!
>
> Aston has been quite active in helping the community grow. Moreover, he
> helped create the book "Dive into Deep Learning" [1], which is great
> interactive material for an introduction to deep learning, developed in
> MXNet.
>
> Welcome, Aston!
>
> -sz
>
> [1] http://d2l.ai
>


Re: [Announcement] New Committer - Yuxi Hu

2019-05-23 Thread Chaitanya Bapat
Congratulations Darren!

On Fri, 24 May, 2019, 12:51 AM Sheng Zha,  wrote:

> Hi all,
>
> Please join me in welcoming Yuxi (Darren) Hu as a new committer of Apache
> MXNet (incubating)!
>
> Yuxi has been one of the core contributors of Horovod integration in
> MXNet. Along the way, he has
> been making meaningful contributions to improve the mxnet backend, such as
> introducing an API for engine push to make it easier to integrate Horovod
> and external operator libraries.
>
> Welcome, Darren!
>
> -sz
>
>


Re: [DISCUSS] AWS Credits for External Contributors

2019-05-09 Thread Chaitanya Bapat
Sure, I'll use the AWS Educate route. (Google Colab or AWS SageMaker would
be great for an MXNet user, but I wanted to build and test; moreover, for
memory profiling I need access to a GPU instance more than anything else.)

Thanks for the quick response.

On Thu, 9 May 2019 at 19:08, Aaron Markham 
wrote:

> One option is Amazon Educate. https://aws.amazon.com/education/awseducate/
> Last I checked, you can get $75/month AWS credit as a student or
> educator. If you belong to an educational organization, your org can
> apply on your behalf and get anyone with that org's domain easier
> access to the credits. Or something like that.
>
> Another route is you might be able to load your test/work into a
> notebook and run it on Google Colab. Vandana has this neat DCGAN with
> MXNet notebook running there.
>
> https://colab.research.google.com/github/vandanavk/mxnet-gluon-gan/blob/dcgan/dcgan/dcgan.ipynb
>
> Will either of those work for you?
>
> Cheers,
> Aaron
>
> On Thu, May 9, 2019 at 11:30 AM Chaitanya Bapat 
> wrote:
> >
> > Hello MXNet community,
> >
> > I was curious to know if there is any possibility of AWS Credits
> > provisioned for external contributors of Apache MXNet. It would be a
> great
> > incentive for more external contributions and in turn more external
> > contributors.
> >
> > Background -
> > Today, while trying to work on Anirudh's memory profiling PR for MXNet, I
> > realized I am short of AWS credits on my personal account. My personal
> > computer (a 2017 Mac) doesn't have an Nvidia GPU, and hence I'm a bit stuck.
> >
> > I don't know if there are others who have faced a similar situation. If
> > that's the case, maybe we can find a solution through free AWS Credits.
> >
> > Thanks,
> > Chai
> >
> > --
> > *Chaitanya Prakash Bapat*
> > *+1 (973) 953-6299*
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*



Re: Unable to comment on GitHub issue

2019-05-09 Thread Chaitanya Bapat
Could you share links to the specific issues? That way I can verify whether
the same happens for me.

On Thu, 9 May 2019 at 14:44, Naveen Swamy  wrote:

> I am unable to comment on certain GitHub issues and see a locked Icon,
> wondering if anyone has experienced this and know why?
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




[DISCUSS] AWS Credits for External Contributors

2019-05-09 Thread Chaitanya Bapat
Hello MXNet community,

I was curious to know if there is any possibility of AWS Credits
provisioned for external contributors of Apache MXNet. It would be a great
incentive for more external contributions and in turn more external
contributors.

Background -
Today, while trying to work on Anirudh's memory profiling PR for MXNet, I
realized I am short of AWS credits on my personal account. My personal
computer (a 2017 Mac) doesn't have an Nvidia GPU, and hence I'm a bit stuck.

I don't know if there are others who have faced a similar situation. If
that's the case, maybe we can find a solution through free AWS Credits.

Thanks,
Chai

-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Re: [Announcement] New Committer - Zach Kimberg

2019-05-09 Thread Chaitanya Bapat
Congratulations Zachary! Way to go!

On Thu, 9 May 2019 at 14:01, Carin Meier  wrote:

> Congrats!
>
> On Thu, May 9, 2019 at 1:41 PM Per da Silva  wrote:
>
> > Nice one! Congratulations =)
> >
> > On Thu, May 9, 2019 at 7:38 PM Jake Lee  wrote:
> >
> > > Congrat!
> > >
> > > On Thu, May 9, 2019 at 10:37 AM Yuan Tang 
> > wrote:
> > >
> > > > Welcome!
> > > >
> > > > On Thu, May 9, 2019 at 1:36 PM Marco de Abreu <
> marco.g.ab...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Welcome!
> > > > >
> > > > > > Hagay Lupesko  wrote on Thu., May 9, 2019, 19:33:
> > > > >
> > > > > > Congratulations Zach - well deserved!
> > > > > >
> > > > > > On Thu, May 9, 2019, 13:26 Qing Lan  wrote:
> > > > > >
> > > > > > > Hi All,
> > > > > > >
> > > > > > > Please join me in welcoming Zach Kimberg (
> > > https://github.com/zachgk)
> > > > > as
> > > > > > a
> > > > > > > new committer.
> > > > > > >
> > > > > > > He has been solving some important bugs in MXNet JVM with
> respect
> > > to
> > > > > > usage
> > > > > > > improvement, build issues and a lot more. He also created the
> > > Jenkins
> > > > > > based
> > > > > > > publish pipeline for us to have standard way to build and test
> > > > > > > static-linked package conveniently for everyone in the
> community.
> > > > > > Moreover,
> > > > > > > he solved a bunch of License problems we have in MXNet and
> > brought
> > > > > > several
> > > > > > > fixes to let us get 1.4.0 release on time.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Qing
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Re: Fujitsu Breaks ImageNet Record using MXNet (under 75 sec)

2019-04-08 Thread Chaitanya Bapat
Yes. Moreover, we should push it on our Twitter, Reddit, Medium, and other
social channels.

On Mon, 8 Apr 2019 at 15:55, Hagay Lupesko  wrote:

> That's super cool Chai - thanks for sharing!
> I also noticed that, and was seeing how we can reach out to the Fujitsu
> guys so they can contribute back into MXNet...
>
> On Mon, Apr 8, 2019 at 10:14 AM Lin Yuan  wrote:
>
> > Chai,
> >
> > Thanks for sharing. This is awesome news!
> >
> > Lin
> >
> > On Mon, Apr 8, 2019 at 8:48 AM Chaitanya Bapat 
> > wrote:
> >
> > > Greetings!
> > >
> > > Great start to a Monday morning, as I came across this news on Import
> AI,
> > > an AI newsletter.
> > >
> > > The newsletter talked about Apache MXNet, hence I thought of sharing it
> > > with our community. This seems to be a great achievement worth paying
> > > attention to.
> > >
> > > *75 seconds: How long it takes to train a network against ImageNet:*
> > > *...Fujitsu Research claims state-of-the-art ImageNet training
> scheme...*
> > > Researchers with Fujitsu Laboratories in Japan have further reduced the
> > > time it takes to train large-scale, supervised learning AI models;
> their
> > > approach lets them train a residual network to around 75% accuracy on
> the
> > > ImageNet dataset after 74.7 seconds of training time. This is a big
> leap
> > > from where we were in 2017 (an hour), and is impressive relative to
> > > late-2018 performance (around 4 minutes: see issue #121
> > > <https://twitter.us13.list-manage.com/track/click?u=67bd06787e84d73db24fb0aa5=28edafc07a=0b77acb987>).
> > >
> > > *How they did it: *The researchers trained their system across *2,048
> > Tesla
> > > V100 GPUs* via the Amazon-developed MXNet deep learning framework. They
> > > used a large mini-batch size of 81,920, and also implemented layer-wise
> > > adaptive scaling (LARS) and a 'warming up' period to increase learning
> > > efficiency.
> > >
> > > *Why it matters:* Training large models on distributed infrastructure
> is
> > a
> > > key component of modern AI research, and the reduction in time we've
> seen
> > > on ImageNet training is striking - I think this is emblematic of the
> > > industrialization of AI, as people seek to create systematic approaches
> > to
> > > efficiently training models across large amounts of computers. This
> trend
> > > ultimately leads to a speedup in the rate of research reliant on
> > > large-scale experimentation, and can unlock new paths of research.
> > > *Read more:* Yet Another Accelerated SGD: ResNet-50 Training on
> > > ImageNet in 74.7 seconds (arXiv)
> > > <https://twitter.us13.list-manage.com/track/click?u=67bd06787e84d73db24fb0aa5=d2b13c879f=0b77acb987>.
> > >
> > > NVIDIA article -
> > >
> > >
> >
> https://news.developer.nvidia.com/fujitsu-breaks-imagenet-record-with-v100-tensor-core-gpus/
> > >
> > > Hope that gives further impetus to strive harder!
> > > Have a good week!
> > > Chai
> > >
> > >  --
> > > *Chaitanya Prakash Bapat*
> > > *+1 (973) 953-6299*
> > >
> > >
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*



Fujitsu Breaks ImageNet Record using MXNet (under 75 sec)

2019-04-08 Thread Chaitanya Bapat
Greetings!

Great start to a Monday morning, as I came across this news on Import AI,
an AI newsletter.

The newsletter talked about Apache MXNet, hence I thought of sharing it with
our community. This seems to be a great achievement worth paying attention
to.

*75 seconds: How long it takes to train a network against ImageNet:*
*...Fujitsu Research claims state-of-the-art ImageNet training scheme...*
Researchers with Fujitsu Laboratories in Japan have further reduced the
time it takes to train large-scale, supervised learning AI models; their
approach lets them train a residual network to around 75% accuracy on the
ImageNet dataset after 74.7 seconds of training time. This is a big leap
from where we were in 2017 (an hour), and is impressive relative to
late-2018 performance (around 4 minutes: see issue #121).

*How they did it: *The researchers trained their system across *2,048 Tesla
V100 GPUs* via the Amazon-developed MXNet deep learning framework. They
used a large mini-batch size of 81,920, and also implemented layer-wise
adaptive scaling (LARS) and a 'warming up' period to increase learning
efficiency.
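
To unpack those two tricks, here is a rough sketch of my own (all constants
are made up for illustration; this is not Fujitsu's code) of a linear warmup
schedule and the LARS layer-wise trust ratio, simplified to omit weight
decay:

    import numpy as np

    def warmup_lr(step, base_lr=0.1, warmup_steps=500):
        """Linearly ramp the learning rate from ~0 to base_lr, then hold."""
        if step < warmup_steps:
            return base_lr * (step + 1) / warmup_steps
        return base_lr

    def lars_trust_ratio(weights, grads, trust_coeff=0.001):
        """LARS scales each layer's step by trust_coeff * ||w|| / ||g||."""
        w_norm = np.linalg.norm(weights)
        g_norm = np.linalg.norm(grads)
        if w_norm == 0.0 or g_norm == 0.0:
            return 1.0
        return trust_coeff * w_norm / g_norm

    w, g = np.ones(100), 0.01 * np.ones(100)
    for step in (0, 249, 999):
        print(step, warmup_lr(step) * lars_trust_ratio(w, g))

The warmup avoids applying the full learning rate to a barely-trained
network, and LARS keeps each layer's update proportional to its weight norm,
which is reportedly what keeps batch sizes like 81,920 stable.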

*Why it matters:* Training large models on distributed infrastructure is a
key component of modern AI research, and the reduction in time we've seen
on ImageNet training is striking - I think this is emblematic of the
industrialization of AI, as people seek to create systematic approaches to
efficiently training models across large amounts of computers. This trend
ultimately leads to a speedup in the rate of research reliant on
large-scale experimentation, and can unlock new paths of research.
*Read more:* Yet Another Accelerated SGD: ResNet-50 Training on ImageNet
in 74.7 seconds (arXiv).

NVIDIA article -
https://news.developer.nvidia.com/fujitsu-breaks-imagenet-record-with-v100-tensor-core-gpus/

Hope that gives further impetus to strive harder!
Have a good week!
Chai

 --
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Re: MXNet Community Monthly Updates

2019-03-06 Thread Chaitanya Bapat
Hello Mu,

Thanks a lot for bringing up this topic. I had thought about a Weekly Digest
for MXNet (a weekly newsletter), which is along similar lines (it can be
made monthly if that sounds better).

Here's the quip doc -
https://chaitanya.quip.com/BT6RAcAigHM9/MXNet-Weekly-Digest

It covers the background, motivation, features, and a mockup of the
newsletter.

Would love to hear your thoughts, as well as the community's, on the same.

Please find attached a snapshot of the weekly digest I came up with.

Thanks,
Chai




On Wed, 6 Mar 2019 at 19:59, Mu Li  wrote:

> Dear Community,
>
> I propose to send a monthly summary to users to broadcast the recent
> progress in the community. It will not only include new features added
> into MXNet, but also various community activities. Here is an example:
>
> Tutorials
> - 10 new lectures teaching at UC Berkeley
> - Video record for "Deploying with Java" at Java World 19
> Computer Vision
> - GluonCV 0.4 release supports pose estimation and improves 10 existing
> models
> - Insightface added a new model XY
> NLP
> - GluonNLP 0.5.1 release improves BERT training
> New Projects
> - A MXNet implementation for paper XY
> MXNet
> - Enhanced Java binding preview
> - Numpy frontend reaches milestone 1
> Incoming Events
> - Meetup at Palto Alto on 4/2
>
> The publishing procedure is we first create a draft wiki page so everyone
> will have a chance to review and add content. After that we will send it
> through an email list.
>
> I'm considering using a 3rd-party service such as mailchimp.com so that
> every user can subscribe to it easily and we can do some marketing analysis
> as well. But I'm happy to re-use us...@mxnet.apache.org if it provides
> similar functionality.
>
> I'd like to hear your feedback on how to make the newsletter more
> user-friendly.
>
> Best
> Mu
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Re: [DISCUSS] Process to remove deprecated operators

2019-02-28 Thread Chaitanya Bapat
This sounds good.
Going further, if we maintain a list of deprecated operators, we can create
a "Good first contribution" issue to improve the log messaging of deprecated
operators (a rough sketch of what such a message could look like follows
below).
If it makes sense, I can go ahead and create that issue.

Hope this helps.
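
As a rough starting point for that issue, here's a sketch of the kind of
message Lin suggests below -- the decorator and its arguments are
hypothetical, not an existing MXNet helper:

    import functools
    import warnings

    def deprecated_op(replacement, removal_release):
        """Hypothetical decorator: warn with the replacement operator and
        the release in which the deprecated one will be removed."""
        def wrap(op):
            @functools.wraps(op)
            def inner(*args, **kwargs):
                warnings.warn(
                    "Operator {} is deprecated and will be removed in "
                    "release {}. Please use {} instead.".format(
                        op.__name__, removal_release, replacement),
                    DeprecationWarning,
                    stacklevel=2,
                )
                return op(*args, **kwargs)
            return inner
        return wrap

    @deprecated_op(replacement="Convolution", removal_release="2.0")
    def Convolution_v1(*args, **kwargs):
        pass  # would dispatch to the legacy implementation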

On Thu, 28 Feb 2019 at 01:54, Lin Yuan  wrote:

> Agreed. When we deprecate an operator, we should add in the log message
> something like "This operator X is deprecated and will be removed in the
> next release. Please use operator Y instead."
>
> Lin
>
> On Wed, Feb 27, 2019 at 10:23 PM Junru Shao 
> wrote:
>
> > Hi Lin,
> >
> > I would love to share some immature ideas about deprecating operators.
> > Not only should we adopt semantic versioning, but we should also provide
> > informative enough error messages for customers to understand how to
> > replace deprecated operators with new ones.
> >
> > Thanks,
> > Junru
> >
> > On Wed, Feb 27, 2019 at 9:30 PM Lin Yuan  wrote:
> >
> > > Sheng,
> > >
> > > Thanks for your quick response.
> > > If that's the case, we will wait till 2.0 release to remove the
> > deprecated
> > > operators from code.
> > >
> > > Best,
> > > Lin
> > >
> > > On Wed, Feb 27, 2019 at 9:06 PM Sheng Zha  wrote:
> > >
> > > > MXNet follows semantic versioning so we will be able to delete them
> in
> > > the
> > > > next major release.
> > > >
> > > > -sz
> > > >
> > > > On Wed, Feb 27, 2019 at 8:53 PM Lin Yuan 
> wrote:
> > > >
> > > > > Dear Community,
> > > > >
> > > > > In MXNet there are many legacy operators, such as
> > > > > <http://mxnet.incubator.apache.org/versions/master/api/python/symbol/symbol.html?highlight=convolution_v1#mxnet.symbol.Convolution_v1>,
> > > > > that have been marked DEPRECATED for several releases. However, these
> > > > > operators still exist in our code. This caused a few problems:
> > > > >
> > > > > 1) Make the codebase bloated and reduce readability
> > > > > 2) Increase unnecessary maintenance effort
> > > > > 3) Bug-prone, as some people will look up this legacy code as an
> > > > > example
> > > > > 4) Cause confusion to end users and make documentation page lengthy
> > > > >
> > > > > I would like to propose the following process (if there is no
> > existing
> > > > one)
> > > > > to remove deprecated operators from our code base.
> > > > >
> > > > > 1. Document the deprecated operators/environment variables in the
> > > release
> > > > > note as well as man pages.
> > > > > 2. Limit the life cycle of deprecated operators/arguments to two
> minor
> > > > > releases. For example, if one operator is marked deprecated in 1.4
> > > release,
> > > > > it will be removed in 1.6 release.
> > > > > 3. If there is some concern raised from customers during 1.4 and
> 1.5
> > > > > release, we can convert the deprecated operator back to current and
> > it
> > > > will
> > > > > be treated as new operator.
> > > > > 4. PRs that remove deprecated operators should contain [Cleanup] in
> > > title.
> > > > >
> > > > > Any comment is appreciated.
> > > > >
> > > > > Lin
> > > > >
> > > >
> > >
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Re: [Announcement] New Committer -- Lin Yuan

2019-02-02 Thread Chaitanya Bapat
Congratulations Lin! Way to go!

On Sat, 2 Feb 2019 at 19:39, sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> Welcome Lin :-)
>
> On Sat, Feb 2, 2019, 3:28 PM Yuan Tang 
> > Welcome Lin!
> >
> > On Sat, Feb 2, 2019 at 6:27 PM Tianqi Chen 
> > wrote:
> >
> > > Dear Community:
> > >
> > > Please join me to welcome Lin Yuan(@apeforest) as a new committer of
> > > Apache(incubating) MXNet!
> > >
> > > He has contributed to various improvements, including better
> > compatibility
> > > of larger arrays across the codebase.
> > >
> > > Commits:
> > > https://github.com/apache/incubator-mxnet/commits?author=apeforest
> > >
> > >
> >
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+author%3Aapeforest
> > >
> > >
> > > Reviews:
> > > https://github.com/apache/incubator-mxnet/pulls?utf8=%
> > > E2%9C%93=reviewed-by%3Aapeforest
> > >
> > > dev@ activity
> > >
> https://lists.apache.org/list.html?*@mxnet.apache.org:lte=6M:Lin%20Yuan
> > >
> > > Tianqi
> > >
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




API change discussion to resolve inconsistencies in Gluon Model Zoo

2018-12-17 Thread Chaitanya Bapat
Hello everyone,

As a contributor to the Apache MXNet project, I wanted to ask the developer
community a few questions pertaining to the Gluon Model Zoo API.

1. With regard to the APIs for networks like mobilenets, densenets, resnets,
and squeezenets (note the case sensitivity of these APIs): they are
inconsistent with alexnet and the inception variants.

By inconsistency I mean they take only **kwargs instead of spelling out all
the parameters the function requires (e.g. pretrained=False, ctx=cpu(0)).

Stating just one function here as an example:

   - mxnet.gluon.model_zoo.vision.mobilenet_v2_0_25(**kwargs)


2. What does the community feel about this? Should we resolve it, or is it
fine the way it is? (Since this would be an API-breaking change, it seemed
best to ask the community before submitting a PR with changes.)

3. What is the difference between APIs bearing the same name, one in
Titlecase and one in lowercase?
e.g. mxnet.gluon.model_zoo.vision.AlexNet(classes=1000, **kwargs)
vs
mxnet.gluon.model_zoo.vision.alexnet(pretrained=False, ctx=cpu(0),
root='/home/jenkins_slave/.mxnet/models', **kwargs)
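
For what it's worth, my current reading of the docs (worth double-checking)
is that the Titlecase name is the network class itself, while the lowercase
name is a factory function that wraps it and handles pretrained/ctx/root:

    from mxnet import cpu
    from mxnet.gluon.model_zoo import vision

    # Titlecase: the HybridBlock class; parameters are not yet initialized.
    net_from_class = vision.AlexNet(classes=1000)

    # lowercase: factory wrapping the class; can download trained weights.
    net_from_factory = vision.alexnet(pretrained=False, ctx=cpu(0))

If that reading is right, the **kwargs inconsistency in question 1 concerns
only the factory functions, not the classes.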

For reference, and to track the above inconsistency, I have created GitHub
issue #13661.

I would highly appreciate your replies to any or all of the above questions.
Thanks,
Chai


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Requesting slack access

2018-09-12 Thread Chaitanya Bapat
Hello,

Chaitanya here. Requesting slack access.
Thanks

-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Requesting slack access

2018-09-12 Thread Chaitanya Bapat
Requesting slack access

-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*
