[Announce] Upcoming Apache MXNet (incubating) 1.4.0 release

2018-11-13 Thread Steffen Rochel
Dear MXNet community,
The agreed plan was to start the code freeze for the 1.4.0 release today. As
the 1.3.1 patch release is still ongoing, I suggest postponing the code
freeze to Friday, 16 November 2018.

Sergey Kolychev has agreed to act as co-release manager for all tasks that
require committer privileges. If anybody is interested in volunteering as
release manager, now is the time to speak up. Otherwise, I will manage the
release.

Regards,
Steffen


Re: [Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Lin Yuan
Thanks guys for your prompt actions. I am so impressed!

Lin



Re: [Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Sheng Zha
I was in the middle of transferring all items labeled with "Feature" to the
"Feature request" label when the "Feature" label was deleted. I'm not sure
who deleted the "Feature" label, but it's gone now.
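If we hit this again, a bulk relabel could also be scripted instead of done
by hand. A rough sketch (the endpoints follow GitHub's v3 REST API; the
token placeholder and the lack of pagination/rate-limit handling are
simplifications):

    # Move every issue labeled "Feature" over to "Feature request".
    import requests

    API = "https://api.github.com/repos/apache/incubator-mxnet"
    HEADERS = {"Authorization": "token <YOUR_TOKEN>"}

    # First page only; a real run would follow the Link header to paginate.
    issues = requests.get(
        f"{API}/issues",
        params={"labels": "Feature", "state": "all", "per_page": 100},
        headers=HEADERS,
    ).json()

    for issue in issues:
        n = issue["number"]
        # Add the new label first, then remove the old one.
        requests.post(f"{API}/issues/{n}/labels",
                      json=["Feature request"], headers=HEADERS)
        requests.delete(f"{API}/issues/{n}/labels/Feature", headers=HEADERS)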

-sz



Re: [Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Acharya, Anirudh
Thanks for doing this.

-
Anirudh



Re: [Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Sheng Zha
Oh, I see. I was moving the other 80 or so, so it was probably a race
condition. Anyway, thanks for being eager to help.

-sz



Re: [Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Naveen Swamy
Done now; I removed the "Feature" label. There were 4 issues with that
label that also had "Feature Request".



Re: [Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Naveen Swamy
There were a few more that had 'Feature' as a label but didn't show up in
the filter search; I manually applied `Feature Request` to them.



Re: [Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Anirudh Acharya
This issue was raised before, here:
https://lists.apache.org/thread.html/3e988e6bd82cb2d69ba20c21bf763952ed22a5732e61f6fba1f89ac8@%3Cdev.mxnet.apache.org%3E

We need someone with committer privileges to fix it.


Thanks
Anirudh





[Question] Difference between "Feature" and "Feature request" labels in Github

2018-11-13 Thread Lin Yuan
Dear Community,

I often see both "Feature" and "Feature request" labels on GitHub issues.
May I know the difference? If they are meant to be the same thing, can we
keep only one of them?

Thanks,

Lin


Re: Nightly/Weekly tests for examples

2018-11-13 Thread Aaron Markham
Naveen - I agree with you there. Not every example should be moved to
tutorials... just the ones that would be great to have on the site and
tested regularly.
I think that having examples for specific APIs will be useful on the
website too.

Ankit, there's a cost/benefit question in keeping both a command-line
example and a notebook: maintaining two copies has overhead.
A couple of options to debate and apply on a case-by-case basis:
1) Maintain only one or the other.
2) If it's really important, maintain both, but add notes and maintenance
directions to both codebases, so that if one is found broken and gets
updated, contributors know they should update the other.

Since the CNN Viz example/tutorial is for visualization, I'd lean towards
#1 and keep only the notebook. Then delete the example; but since the
tutorial requires a Python file from that examples folder as a utility,
move that file to a tutorials_utils folder in /docs. That folder should be
organized appropriately and excluded from the docs build config in conf.py;
look for "exclude_patterns".
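For reference, a minimal sketch of that exclusion, assuming a Sphinx
conf.py and the proposed (not yet existing) tutorials_utils location:

    # docs/conf.py -- keep shared tutorial utilities out of the rendered docs
    exclude_patterns = [
        '_build',
        'tutorials_utils/*',  # helper scripts used by tutorials, not pages
    ]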

My 3 cents... Cheers,
Aaron



Re: [VOTE] Release Apache MXNet (incubating) version 1.3.1.rc0

2018-11-13 Thread Carin Meier
+1 - Clojure package tested fine with Scala jars

On Mon, Nov 12, 2018 at 6:53 PM Anton Chernov  wrote:

> Dear MXNet community,
>
> This is the vote to release Apache MXNet (incubating) version 1.3.1.
> Voting starts now, on Monday the 12th of November 2018, and closes at
> 14:00 on Thursday the 15th of November 2018, Pacific Time (PT).
>
> Link to release notes:
> https://cwiki.apache.org/confluence/x/eZGzBQ
>
> Link to release candidate 1.3.1.rc0:
> https://github.com/apache/incubator-mxnet/releases/tag/1.3.1.rc0
>
> Link to source and signatures on apache dist server:
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.3.1.rc0/
>
> Link to scala packages on the staging repo:
>
> * CPU
>
> https://repository.apache.org/content/repositories/snapshots/org/apache/mxnet/mxnet-full_2.11-osx-x86_64-cpu/1.3.1-SNAPSHOT/
>
> * GPU
>
> https://repository.apache.org/content/repositories/snapshots/org/apache/mxnet/mxnet-full_2.11-linux-x86_64-gpu/1.3.1-SNAPSHOT/
>
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
>
>
> Best regards,
> Anton
>


Re: Nightly/Weekly tests for examples

2018-11-13 Thread Khedia, Ankit

We can start small by consolidating a few examples that exist in both the
tutorials and examples folders.
One such example is the GradCAM implementation:
In tutorials -
https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/vision/cnn_visualization.md
In examples -
https://github.com/apache/incubator-mxnet/blob/master/example/cnn_visualization
The only difference is that, for this particular example, the examples
folder has a Python script in place of the notebook.
Any thoughts/suggestions?

On the other hand, I agree with Sandeep that there should be some basic
testing of examples, and we can begin with a small set of Python examples.

—Ankit



Re: MXNet - Gluon - Audio

2018-11-13 Thread Lai Wei
Hi Gaurav,

Thanks for starting this. I see the PR is out; I left some initial
reviews. Good work!

In addition to Sandeep's queries, I have the following:
1. Can we include a simple classic audio dataset for users to directly
import and try out, like MNIST in vision? (e.g.,
http://pytorch.org/audio/datasets.html#yesno)
2. Librosa provides some good audio feature extraction utilities, so we can
use it for now. But it's slow, as you have to convert between ndarray and
numpy. In the long term, can we make the transforms use mxnet operators and
change your transforms to hybrid blocks? For example, the mxnet FFT
operator can be used in a hybrid block transform, which will be a lot
faster.
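A minimal sketch of what such a hybrid block could look like, using the
contrib FFT operator (if I recall correctly, contrib.fft is GPU-backed in
MXNet 1.x, so this assumes a GPU context):

    import mxnet as mx
    from mxnet.gluon import HybridBlock

    class FFTTransform(HybridBlock):
        """Hybridizable transform computing a 1-D FFT with an mxnet op,
        so data never leaves the MXNet graph (no ndarray<->numpy trips)."""
        def __init__(self, compute_size=128, **kwargs):
            super(FFTTransform, self).__init__(**kwargs)
            self._compute_size = compute_size

        def hybrid_forward(self, F, x):
            # F is mx.nd (imperative) or mx.sym (hybridized); the output
            # interleaves real/imaginary parts along the last axis.
            return F.contrib.fft(x, compute_size=self._compute_size)

    # Usage sketch:
    # t = FFTTransform(); t.hybridize()
    # spectrum = t(mx.nd.random.uniform(shape=(1, 128), ctx=mx.gpu()))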

Some additional references of users already applying mxnet to audio; we
should aim to make the file load/preprocess/transform pipeline easier and
more automated:
1. https://github.com/chen0040/mxnet-audio
2. https://github.com/shuokay/mxnet-wavenet

Looking forward to seeing this feature out.
Thanks!

Best Regards

Lai




Re: Nightly/Weekly tests for examples

2018-11-13 Thread Naveen Swamy
Aaron, IMO tutorials have a specific purpose: to introduce concepts and
APIs to users. I think converting examples to tutorials wholesale would
overwhelm users; we should carefully choose which examples we want to turn
into tutorials.

I agree that today the examples are a graveyard of untested code. My
suggestion is to add some testing to an example whenever you touch it, at
the least to check its functionality. These tests can be run once a week.
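As a sketch of what such a weekly check could look like (the script path
and the --epochs flag are assumptions about how an example would expose a
small workload, not existing code):

    # test_examples_weekly.py -- run an example end-to-end with a tiny
    # workload and just assert that it exits cleanly.
    import subprocess
    import sys

    def test_cnn_visualization_example():
        result = subprocess.run(
            [sys.executable, "example/cnn_visualization/gradcam_demo.py",
             "--epochs", "1"],  # hypothetical flag to keep the run short
            capture_output=True, text=True, timeout=900)
        assert result.returncode == 0, result.stderr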



Re: MXNet - Gluon - Audio

2018-11-13 Thread sandeep krishnamurthy
Thanks, Gaurav, for starting this initiative. The design document is
detailed and gives all the needed information.
Starting by adding this in "contrib" is a good idea, while we expect a few
rough edges and cleanups to follow.

I had the following queries:
1. Is there any analysis comparing LibROSA with other libraries w.r.t.
features, performance, and community usage in the audio domain?
2. What is the recommendation for the LibROSA dependency: part of the MXNet
PyPi package, or asking the user to install it if required? I prefer the
latter, similar to protobuf in ONNX-MXNet (see the sketch after this list).
3. I see LibROSA is a fully Python-based library. Does this dependency
block future use cases where we want to implement the transforms as
operators and allow cross-language support?
4. In the performance design considerations, the difference between
lazy=True and lazy=False is scary (8 minutes vs. 4 hours!). This requires
some more analysis. If we know that turning a flag on/off causes a ~30X
performance degradation, should we even provide that control to the user?
What is the impact of this on memory usage?
5. I see LibROSA has an ISC license
(https://github.com/librosa/librosa/blob/master/LICENSE.md), which permits
free use provided the license notice is retained. I am not sure if this is
OK; I request other committers/mentors to advise.
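On point 2, a minimal sketch of the optional-dependency guard I have in
mind, assuming librosa stays out of the hard requirements (the function
name and message are illustrative):

    def _require_librosa():
        # Lazy import so `import mxnet` works without librosa installed;
        # only the audio transforms that need it trigger this check.
        try:
            import librosa
        except ImportError:
            raise ImportError(
                "librosa is required for the gluon audio transforms; "
                "install it with `pip install librosa`.")
        return librosa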

Best,
Sandeep

On Fri, Nov 9, 2018 at 5:45 PM Gaurav Gireesh 
wrote:

> Dear MXNet Community,
>
> I recently started looking into performing some simple multi-class sound
> classification tasks with audio data and realized that, as a user, I would
> like MXNet to have an out-of-the-box feature which allows us to load audio
> data (at least 1 file format), extract features (or apply some common
> transforms/feature extraction), and train a model using the audio dataset.
> This could be a first step towards building and supporting APIs similar to
> what we have for "vision" related use cases in MXNet.
>
> Below is the design proposal:
>
> Gluon - Audio Design Proposal
> 
>
> I would highly appreciate you taking the time to review and provide
> feedback and comments/suggestions on this.
> Looking forward to your support.
>
>
> Best Regards,
>
> Gaurav Gireesh
>


-- 
Sandeep Krishnamurthy


Re: Nightly/Weekly tests for examples

2018-11-13 Thread Aaron Markham
I've been actively promoting moving examples to tutorials during reviews.
That way they fall under the testing umbrella and get added to the website.

Many times there's not really a great distinction as to why something is in
the examples folder, other than it's like a graveyard of untested sample
code.

I would suggest a starting strategy: when updating an example, ask
yourself whether, with just a little more effort, it could be converted
into a tutorial.

The last thing CI needs is more flaky tutorial tests, so whatever is done
here should use the more robust approaches that are being discussed.

Cheers,
Aaron

On Mon, Nov 12, 2018, 16:24 sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> Thanks, Ankit, for bringing this up. @Anirudh - All the concerns you raised
> are very valid. Here are my thoughts:
> 1. There were several examples that were crashing or had compiler errors.
> This is a very bad user experience. All example scripts should be at
> least runnable!
> 2. While I agree the examples are too diverse (Python scripts, notebooks,
> epochs, print statements, etc.), we can always start small, say with 5
> examples. We can use these to streamline all examples into Python scripts
> with print statements and a main-function invoker that can take params
> like epoch, dataset, etc. (a sketch follows at the end of this message).
> 3. We can start with weekly tests to avoid making the nightly test
> pipeline too long.
> 4. One possible issue is the few examples that depend on a large or
> controlled dataset. I am not sure yet how to solve this, but we can
> think about it.
>
> Any suggestions?
> Best,
> Sandeep
>
>
>
> On Mon, Nov 12, 2018 at 10:38 AM Anirudh Acharya 
> wrote:
>
> > Hi Ankit,
> >
> > I have a few concerns about testing examples. Before writing tests for
> > examples,
> >
> >    - You will need to first decide what constitutes a test for an
> >    example, because examples are not API calls, which have return
> >    statements so that a test can just call the API and assert certain
> >    values. Just testing that an example is a compilable Python script
> >    will not add much value, in my opinion.
> >    - Testing for example outputs and results will require a re-write
> >    of many of the examples, because many of them currently just have
> >    print statements as output and do not return any value as such. I
> >    am not sure it is worth the dev-effort.
> >    - The current set of examples in the mxnet repo is very diverse:
> >    some are written as Python notebooks, some are just Python scripts
> >    with paper implementations, and some are just illustrations of
> >    certain mxnet features. I am curious to know how you will write
> >    tests for these things.
> >
> >
> > Looking forward to seeing the design of this test bed/framework.
> >
> >
> > Thanks
> > Anirudh Acharya
> >
> > On Fri, Nov 9, 2018 at 2:39 PM Marco de Abreu wrote:
> >
> > > Hello Ankit,
> > >
> > > that's a great idea! Using the tutorial tests as reference is a great
> > > starting point. If you are interested, please don't hesitate to attend
> > > the Berlin user group in case you would like to discuss your first
> > > thoughts in-person before drafting a design.
> > >
> > > -Marco
> > >
> > >
> > > On Fri., 9 Nov. 2018 at 23:23, khedia.an...@gmail.com <
> > > khedia.an...@gmail.com> wrote:
> > >
> > > > Hi MXNet community,
> > > >
> > > > Recently, a few other contributors and I focused on fixing examples
> > > > in our repository that were not working out of the box as expected.
> > > > https://github.com/apache/incubator-mxnet/issues/12800
> > > > https://github.com/apache/incubator-mxnet/issues/11895
> > > > https://github.com/apache/incubator-mxnet/pull/13196
> > > >
> > > > Some of the examples failed after API changes and remained uncaught
> > > > until a user reported the issue. While the community is actively
> > > > working on fixing them, the problem might recur after a few days if
> > > > we don't have a proper mechanism to catch regressions.
> > > >
> > > > So, I would like to propose enabling nightly/weekly tests for the
> > > > examples, similar to what we have for tutorials, to catch any such
> > > > regressions. The tests could check only the basic functionality of
> > > > the examples: small examples can run to completion, whereas long
> > > > training examples would run for only a few epochs.
> > > >
> > > > Any thoughts from the community? Any other suggestions for fixing
> > > > the same?
> > > >
> > > > Regards,
> > > > Ankit Khedia
> > > >
> > >
> >
>
>
> --
> Sandeep Krishnamurthy
>
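A quick sketch of the main-function invoker pattern from Sandeep's point 2
above (the argument names, defaults, and train stub are illustrative, not
existing code):

    # Uniform CLI entry point an example could expose, so a test job can
    # shrink the workload via flags.
    import argparse

    def train(epochs, dataset):
        # The example-specific training loop would go here.
        print("training on %s for %d epoch(s)" % (dataset, epochs))

    def main():
        parser = argparse.ArgumentParser(description="Example runner")
        parser.add_argument("--epochs", type=int, default=10)
        parser.add_argument("--dataset", type=str, default="mnist")
        args = parser.parse_args()
        train(args.epochs, args.dataset)

    if __name__ == "__main__":
        main()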