Re: [Discussion] Remove bundled llvm OpenMP

2019-05-20 Thread Pedro Larroy
Hi Anton, Stas.

Can we reopen this PR and get it merged as per the data collected by Stas?

https://github.com/apache/incubator-mxnet/pull/12160

https://cwiki.apache.org/confluence/display/MXNET/Benchmarking+MXNet+with+different+OpenMP+implementations

There are multiple issues that will be fixed by solving this problem.


Pedro

On Tue, Feb 12, 2019 at 4:54 AM Anton Chernov  wrote:
>
> I would like to propose a possible alternative solution for consideration.
>
> If keeping llvm OpenMP as a submodule is inevitable, one could make the
> following adjustments:
>
> Since compilers try to find their own OpenMP library implicitly, MXNet
> needs to ensure that only the bundled version is found. Therefore during
> the build and also during deployment this library has to provide symlinks
> for each possible compiler that would link to the built artifact, i.e.
>
> libiomp.so -> libgomp.so -> libomp.so
>
> The MKLML iomp would need to be hidden and removed as well.
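> 
> As a sketch of that deployment step (the path below is a placeholder, not
> the actual build layout), the aliases could be created with C++17
> std::filesystem:
> 
> #include <filesystem>
> namespace fs = std::filesystem;
> 
> int main() {
>   const fs::path dir = "build/3rdparty/openmp";   // hypothetical output dir
>   for (const char *alias : {"libgomp.so", "libiomp5.so"}) {
>     fs::remove(dir / alias);                      // drop a stale link if any
>     fs::create_symlink("libomp.so", dir / alias); // alias -> built artifact
>   }
>   return 0;
> }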
>
> On Windows it would be a different story, but as can be seen in [1],
> bundled OpenMP was not included in the Windows build anyway.
>
> Alternatively: always use iomp (with the same symlinking trick, though)
> provided
> by MKLML distribution [2]. This potentially could work on Windows as well.
>
> Best
> Anton
>
> [1]
> https://github.com/apache/incubator-mxnet/blob/8a63bdecf2d9f12d34fe5874957ae4c867eb5f5b/CMakeLists.txt#L408-L410
> [2] https://github.com/intel/mkl-dnn/releases
>
> Tue, 12 Feb 2019 at 11:22, Anton Chernov :
>
> > Recent benchmarking results have been published here [1]. Experiments
> > compare different OpenMP implementations as well as binaries compiled with
> > different compilers including GCC, Clang and ICC.
> >
> > During experimentation another issue with mixing up libraries was
> > identified and described here [2].
> >
> > Best
> > Anton
> >
> > [1] https://cwiki.apache.org/confluence/x/2wclBg
> > [2]
> > https://github.com/apache/incubator-mxnet/issues/14087#issuecomment-461734041
> >
> >
> > Sun, 9 Dec 2018 at 16:28, Anton Chernov :
> >
> >> Hi Chris,
> >>
> >> Following up on the issue, are all things resolved in the discussion?
> >>
> >> If yes, I kindly ask you to reopen this PR and remove ‘requesting
> >> changes’ status:
> >> https://github.com/apache/incubator-mxnet/pull/12160
> >>
> >> Thank you.
> >>
> >>
> >> Best
> >> Anton
> >>
> >>
> >> Tue, 27 Nov 2018 at 17:15, Anton Chernov :
> >>
> >>> Another thing to take into consideration:
> >>>
> >>> All python artefacts that are created (PyPi) are built with make and are
> >>> not using the bundled OpenMP library.
> >>>
> >>> One step for the switch to CMake to happen is the approval and merging
> >>> of the mentioned PR:
> >>>
> >>> https://github.com/apache/incubator-mxnet/pull/12160
> >>>
> >>> If there are no other objections I kindly ask Chris Olivier to remove
> >>> his 'requesting changes' veto on it to unblock the CMake overhaul work.
> >>>
> >>> Thank you.
> >>>
> >>> Best
> >>> Anton
> >>>
> >>> Thu, 22 Nov 2018 at 17:11, Anton Chernov :
> >>>
> 
>  Thank you for your answer, Chris.
> 
>  > The whole “mixing omp libraries” is something that occurs in
>  > production every day and certainly in everything that uses mkl.
> 
>  I'm afraid this statement is wrong. Intel MKL-DNN strictly ensures that
>  this mixing does not happen:
> 
>  "Intel MKL-DNN uses OpenMP* for parallelism and requires an OpenMP
>  runtime library to work. As different OpenMP runtimes may not be binary
>  compatible it's important to ensure that only one OpenMP runtime is used
>  throughout the application. Having more than one OpenMP runtime initialized
>  may lead to undefined behavior resulting in incorrect results or crashes."
>  [1]
> 
>  That is why 2 different MKLML libraries are provided:
> 
>  lib/libmklml_gnu.so   | Intel MKL small library for GNU* OpenMP runtime
>  lib/libmklml_intel.so | Intel MKL small library for Intel(R) OpenMP runtime
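> 
>  To see this in practice, one can check at runtime that only a single OpenMP
>  runtime ended up in the process by walking the loaded shared objects. A
>  minimal sketch (Linux-only; the library names are the usual ones, nothing
>  MXNet-specific):
> 
>  #define _GNU_SOURCE
>  #include <link.h>
>  #include <stdio.h>
>  #include <string.h>
> 
>  /* count loaded shared objects that look like an OpenMP runtime */
>  static int count_omp(struct dl_phdr_info *info, size_t size, void *data) {
>    int *n = (int *)data;
>    (void)size;
>    if (strstr(info->dlpi_name, "libgomp") ||
>        strstr(info->dlpi_name, "libiomp") ||
>        strstr(info->dlpi_name, "libomp")) {
>      printf("OpenMP runtime loaded: %s\n", info->dlpi_name);
>      ++*n;
>    }
>    return 0; /* keep iterating */
>  }
> 
>  int main(void) {
>    int n = 0;
>    dl_iterate_phdr(count_omp, &n);
>    if (n > 1)
>      fprintf(stderr, "warning: %d OpenMP runtimes in one process\n", n);
>    return 0;
>  }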
> 
>  > is the suggestion that libiomp be removed from mkl?
> 
>  That is certainly not my suggestion.
> 
>  > have you spoken with intel? have you consulted Intel at all?
> 
>  Yes, I have asked for comments on the issue.
> 
>  > “hard to debug random crash”. you’re seeing an assertion which is
>  > probably ...
> 
>  I'm seeing the result of undefined behaviour. And I want to put
>  emphasis on the following statement:
> 
>  Regardless of whether there is a particular reason for the assert -
>  it is the result of behaviour that should not happen. There are valid ways
>  to use llvm OpenMP in MXNet and the current way is not one of them.
> 
>  > The lack of root-causing the problem and knee-jerk solution here
>  > makes me uncomfortable.
> 
>  I hope that my efforts in highlighting the problems reach you and
>  mitigate your discomfort.
> 

Re: [RFC] Support for creation of Large Tensors in MXNet

2019-05-18 Thread Sheng Zha
Thanks for clarifying. This seems like a duplicate of [1] (though there wasn't 
any feedback there). I think everyone already agrees on the goal. 

> Currently, we assume the max size of each dimension.

I agree with Tao that int64_t would be necessary given that it's common to 
flatten and reshape ndarrays.
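
As a minimal sketch of the overflow risk (illustrative numbers, not MXNet 
code): with 32-bit sizes the element count of such an ndarray overflows even 
though every dimension fits comfortably in int32_t.

    #include <cstdint>
    #include <cstdio>

    int main() {
      int32_t shape[2] = {100000, 50000};        // each dim well below 2^31
      // int32_t n = shape[0] * shape[1];        // overflows: UB for signed int
      int64_t n = int64_t(shape[0]) * shape[1];  // 5000000000 elements
      printf("elements: %lld\n", (long long)n);
      return 0;
    }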

To help avoid repeating discussion and to make this discussion more productive, 
here is some of the relevant context that I'm aware of:
- The first part of the proposed change was merged in #11742 which caused 
#14496, i.e. performance degradation in transpose and imdecode. The full scope 
is still unclear.
- A compilation flag was added in #14570 so that people can explicitly opt in 
to the support without impacting others using the default setting.

Given the context, since the goal is to support large tensors by default without 
performance impact, I hope more investigation could accompany this proposal 
that covers:
- The problem: list the parts (e.g. operators) whose performance is impacted by 
changing the index type, and the amount of slow-down.
- The solution for addressing the slow-down.

Thanks.

-sz

[1] 
https://lists.apache.org/thread.html/52b784cf85f89a22355e195fc88b01992fb1993a6f08499a46fa1ff8@%3Cdev.mxnet.apache.org%3E

On 2019/05/19 02:43:39, "Srivastava, Rohit Kumar" 
 wrote: 
> Hi Tao,
> Existing MXNet implementation doesn't support large tensors. MXNet 
> NDArray creation for tensors of sizes larger than 2^32 is only supported by 
> enabling a build flag for now. The purpose of this thread is to have the 
> community provide feedback on the design cwiki for *Large Tensor Support* in 
> MXNet. The intention is to make large tensor support a default feature in 
> MXNet (in future) w/o any performance impact so consumers do not have to 
> build it from source. 
> 
> -Rohit
> 
> On 5/18/19, 5:59 PM, "Lv, Tao A"  wrote:
> 
> Hi Rohit,
> 
> The existing MKL-DNN and its integration in MXNet should already support 
> *large tensor* which means the total number of elements (Prod(shape)) can 
> exceed INT_MAX. Feel free to let me know if you find any issue when using MKL-DNN 
> operators with large tensors.
> 
> For large dimension size (shape[x]), MKL-DNN is going to support it in its 
> 1.0 release, which will be released in the middle of the year. But I'm not 
> sure if MXNet has a plan to support that.
> 
> Thanks,
> -tao
> 
> -Original Message-
> From: Srivastava, Rohit Kumar [mailto:srivastava@buckeyemail.osu.edu] 
> Sent: Sunday, May 19, 2019 7:23 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [RFC] Support for creation of Large Tensors in MXNet
> 
> Hi Tao,
> There are already a couple of operators implemented in MXNet that are 
> currently supporting Tensors with size over ~4.5 billion. In the meantime 
> core MXNet can move ahead with providing initial support for such large 
> tensors so MXNet customers can start using it.
> 
> Good to hear MKLDNN will provide support for such cases. Do you have a 
> timeline as to when this feature will be released ?
> 
> -Rohit
> 
> On 4/29/19, 7:18 PM, "Lv, Tao A"  wrote:
> 
> Thank you Lin! I would expect the current MKL-DNN implementation 
> already supports the scenario you mentioned here. This can be verified by this 
> issue: https://github.com/apache/incubator-mxnet/issues/13451
> 
> But as I said before, since we support flatten and reshape operators, 
> it's possible for users to convert a tensor with a large element size to a 
> tensor with a large dimension size. This will possibly cause issues there.
> 
> To cover more cases, MKL-DNN is going to support INT64 dimension size 
> in its coming 1.0 major release.
> 
> -tao
>     
> -Original Message-
> From: Lin Yuan [mailto:apefor...@gmail.com] 
> Sent: Tuesday, April 30, 2019 12:56 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [RFC] Support for creation of Large Tensors in MXNet
> 
> Tao,
> 
> - what's the max size of dimensionality? Which data type is used to 
> define dimensionality (ndims)?
> We assume the max size of dimensionality is relatively small. Hence 
> `int` data type is used to define ndim
> 
> - what's the max size of each dimension? Which data type is used to 
> define dimension size (shape[x])?
> Currently, we assume the max size of each dimension is not going to exceed 
> 2^31 in real applications. Hence the data type is `int32_t`
> 
> - what's the max size of total elements? Which data type 

Re: [RFC] Support for creation of Large Tensors in MXNet

2019-05-18 Thread Srivastava, Rohit Kumar
Hi Tao,
Existing MXNet implementation doesn't support large tensors. MXNet NDArray 
creation for tensors of sizes larger than 2^32 is only supported by enabling a 
build flag for now. The purpose of this thread is to have the community provide 
feedback on the design cwiki for *Large Tensor Support* in MXNet. The intention 
is to make large tensor support a default feature in MXNet (in future) w/o any 
performance impact so consumers do not have to build it from source. 
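
For illustration, the opt-in looks roughly like the sketch below (the flag and 
type names are placeholders for this example, not necessarily the ones in the 
codebase):

    #include <cstdint>
    #include <cstdio>

    // hypothetical build flag, e.g. passed as -DMXNET_LARGE_TENSOR=1
    #if defined(MXNET_LARGE_TENSOR)
    typedef int64_t index_t;  // large-tensor build: counts beyond 2^31 are safe
    #else
    typedef int32_t index_t;  // default build: avoids known perf regressions
    #endif

    int main(void) {
      printf("index_t is %zu bytes\n", sizeof(index_t));
      return 0;
    }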

-Rohit

On 5/18/19, 5:59 PM, "Lv, Tao A"  wrote:

Hi Rohit,

The existing MKL-DNN and its integration in MXNet should already support 
*large tensor* which means the total number of elements (Prod(shape)) can 
exceed INT_MAX. Feel free to let me know if you find any issue when using MKL-DNN 
operators with large tensors.

For large dimension size (shape[x]), MKL-DNN is going to support it in its 1.0 
release, which will be released in the middle of the year. But I'm not sure if 
MXNet has a plan to support that.

Thanks,
-tao

-Original Message-
From: Srivastava, Rohit Kumar [mailto:srivastava@buckeyemail.osu.edu] 
Sent: Sunday, May 19, 2019 7:23 AM
To: dev@mxnet.incubator.apache.org
    Subject: Re: [RFC] Support for creation of Large Tensors in MXNet

Hi Tao,
There are already couple of operators implemented in MXNet that are 
currently supporting Tensors with size over ~4.5 billion. In the meantime core 
MXNet can move ahead with providing initial support for such large tensors so 
MXNet customers can start using it.

Good to hear MKLDNN will provide support for such cases. Do you have a 
timeline as to when this feature will be released ?

-Rohit

On 4/29/19, 7:18 PM, "Lv, Tao A"  wrote:

Thank you Lin! I would expect the current MKL-DNN implementation 
already supports the scenario you mentioned here. This can be verified by this 
issue: https://github.com/apache/incubator-mxnet/issues/13451

But as I said before, since we support flatten and reshape operators, 
it's possible for users to convert a tensor with a large element size to a tensor 
with a large dimension size. This will possibly cause issues there.

To cover more cases, MKL-DNN is going to support INT64 dimension size 
in its coming 1.0 major release.

-tao

-Original Message-
From: Lin Yuan [mailto:apefor...@gmail.com] 
Sent: Tuesday, April 30, 2019 12:56 AM
To: dev@mxnet.incubator.apache.org
Subject: Re: [RFC] Support for creation of Large Tensors in MXNet

Tao,

- what's the max size of dimensionality? Which data type is used to 
define dimensionality (ndims)?
We assume the max size of dimensionality is relatively small. Hence 
`int` data type is used to define ndim

- what's the max size of each dimension? Which data type is used to 
define dimension size (shape[x])?
Currently, we assume the max size of each dimension is not going to exceed 
2^31 in real applications. Hence the data type is `int32_t`

- what's the max size of total elements? Which data type is used to 
define element size (Prod(shape))?
We assume the total number of elements in a tensor can be larger than 
2^32 in some applications such as deep graph library. We use the data type 
`int64_t` to represent the total element size. Currently due to performance 
regression in some operators (such as transpose), we use a compiler flag to 
set this data type to `int32_t` by default. Once we have ways to mitigate the 
performance regression, we will set the default data type to `int64_t`, which 
is part of the effort in this project that Rohit proposed.
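
As an illustrative sketch (not the actual MXNet kernel), the regression risk 
comes from kernels whose inner loops are indexed with the wider type; the loop 
body is identical and only the index type changes:

    #include <cstdint>
    #include <vector>

    template <typename Index>
    void transpose(const float *src, float *dst, Index rows, Index cols) {
      for (Index i = 0; i < rows; ++i)
        for (Index j = 0; j < cols; ++j)
          dst[j * rows + i] = src[i * cols + j];  // 64-bit index arithmetic
    }                                             // may vectorize worse

    int main() {
      std::vector<float> a(16, 1.0f), b(16);
      transpose<int32_t>(a.data(), b.data(), 4, 4);  // default build
      transpose<int64_t>(a.data(), b.data(), 4, 4);  // large-tensor build
      return 0;
    }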

What is the plan in MKLDNN to support large tensors? We may want to 
coordinate the progress since many operators are using MKLDNN implementation in 
CPU now.

Many Thanks,

Lin

On Sun, Apr 28, 2019 at 7:52 PM Lv, Tao A  wrote:

> Thank you for bringing this topic to dev, Rohit.
>
> Regarding large tensor, can you articulate:
> - what's the max size of dimensionality? Which data type is used to 
> define dimensionality (ndims)?
> - what's the max size of each dimension? Which data type is used to 
> define dimension size (shape[x])?
> - what's the max size of total elements? Which data type is used to 
> define element size (Prod(shape))?
>
> For me, any of these three can be *large*.
>
> -Original Message-
> From: Srivastava, Rohit Kumar 
> [mailto:srivastava@buckeyemail.osu.edu]
> Sent: Saturday, April 27, 2019 7:33 AM
> To: dev@mxnet.incubator.apache.org

RE: [RFC] Support for creation of Large Tensors in MXNet

2019-05-18 Thread Lv, Tao A
Hi Rohit,

The existing MKL-DNN and its integration in MXNet should already support *large 
tensor* which means the total number of elements (Prod(shape)) can exceed 
INT_MAX. Feel free to let me know if you find any issue when using MKL-DNN 
operators with large tensors.

For large dimension size (shape[x]), MKL-DNN is going to support it in its 1.0 
release, which will be released in the middle of the year. But I'm not sure if 
MXNet has a plan to support that.
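
To make the two cases concrete, a small sketch (illustrative numbers only): 
the first shape is *large* in Prod(shape) while every dimension fits in int32, 
the second is *large* in a single shape[x] and needs INT64 dimension support:

    #include <cstdint>
    #include <cstdio>

    int main(void) {
      int64_t a[2] = {100000, 100000};  // Prod(shape) = 1e10 > INT_MAX, dims small
      int64_t b[1] = {3000000000LL};    // shape[0] > 2^31 - 1, needs INT64 dims
      printf("%lld %lld\n", (long long)(a[0] * a[1]), (long long)b[0]);
      return 0;
    }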

Thanks,
-tao

-Original Message-
From: Srivastava, Rohit Kumar [mailto:srivastava@buckeyemail.osu.edu] 
Sent: Sunday, May 19, 2019 7:23 AM
To: dev@mxnet.incubator.apache.org
Subject: Re: [RFC] Support for creation of Large Tensors in MXNet

Hi Tao,
There are already a couple of operators implemented in MXNet that are 
currently supporting Tensors with size over ~4.5 billion. In the meantime core 
MXNet can move ahead with providing initial support for such large tensors so 
MXNet customers can start using it.

Good to hear MKLDNN will provide support for such cases. Do you have a timeline 
as to when this feature will be released ?

-Rohit

On 4/29/19, 7:18 PM, "Lv, Tao A"  wrote:

Thank you Lin! I would expect the current MKL-DNN implementation already 
supports the scenario you mentioned here. This can be verified by this issue: 
https://github.com/apache/incubator-mxnet/issues/13451

But as I said before, since we support flatten and reshape operators, it's 
possible for users to convert a tensor with a large element size to a tensor 
with a large dimension size. This will possibly cause issues there.

To cover more cases, MKL-DNN is going to support INT64 dimension size in 
its coming 1.0 major release.

-tao

-Original Message-
From: Lin Yuan [mailto:apefor...@gmail.com] 
Sent: Tuesday, April 30, 2019 12:56 AM
To: dev@mxnet.incubator.apache.org
    Subject: Re: [RFC] Support for creation of Large Tensors in MXNet

Tao,

- what's the max size of dimensionality? Which data type is used to define 
dimensionality (ndims)?
We assume the max size of dimensionality is relatively small. Hence `int` 
data type is used to define ndim

- what's the max size of each dimension? Which data type is used to define 
dimension size (shape[x])?
Currently, we assume the max size of each dimension is not going to exceed
2^31 in real applications. Hence the data type is `int32_t`

- what's the max size of total elements? Which data type is used to define 
element size (Prod(shape))?
We assume the total number of elements in a tensor can be larger than 2^32 
in some applications such as deep graph library. We use the data type `int64_t` 
to represent the total element size. Currently due to performance regression in 
some operators (such as transpose), we use a compiler flag to set this data 
type to `int32_t` by default. Once we have ways to mitigate the performance 
regression, we will set the default data type to `int64_t`, which is part of 
the effort in this project that Rohit proposed.

What is the plan in MKLDNN to support large tensors? We may want to 
coordinate the progress since many operators are using MKLDNN implementation in 
CPU now.

Many Thanks,

Lin

On Sun, Apr 28, 2019 at 7:52 PM Lv, Tao A  wrote:

> Thank you for bringing this topic to dev, Rohit.
>
> Regarding large tensor, can you articulate:
> - what's the max size of dimensionality? Which data type is used to 
> define dimensionality (ndims)?
> - what's the max size of each dimension? Which data type is used to 
> define dimension size (shape[x])?
> - what's the max size of total elements? Which data type is used to 
> define element size (Prod(shape))?
>
> For me, any of these three can be *large*.
>
> -Original Message-
> From: Srivastava, Rohit Kumar 
> [mailto:srivastava@buckeyemail.osu.edu]
> Sent: Saturday, April 27, 2019 7:33 AM
> To: dev@mxnet.incubator.apache.org
> Subject: [RFC] Support for creation of Large Tensors in MXNet
>
> Dear Community,
>
> Currently MXNet supports creation of Tensors containing up to 2^32 
> elements. However, there are cases where tensors of size over 5 billion 
> are required.
>
> We plan to support creation of large tensors on MXNet. A design 
> proposal is ready for review:
> https://cwiki.apache.org/confluence/display/MXNET/Large+Tensor+Support
>
> We will appreciate any help and feedback from the community.
>
> Thank you!
>
> Rohit
>




Re: [RFC] Support for creation of Large Tensors in MXNet

2019-05-18 Thread Srivastava, Rohit Kumar
Hi Tao,
There are already a couple of operators implemented in MXNet that are 
currently supporting Tensors with size over ~4.5 billion. In the meantime core 
MXNet can move ahead with providing initial support for such large tensors so 
MXNet customers can start using it.

Good to hear MKLDNN will provide support for such cases. Do you have a timeline 
as to when this feature will be released ?

-Rohit

On 4/29/19, 7:18 PM, "Lv, Tao A"  wrote:

Thank you Lin! I would expect the current MKL-DNN implementation already 
supports the scenario you mentioned here. This can be verified by this issue: 
https://github.com/apache/incubator-mxnet/issues/13451

But as I said before, since we support flatten and reshape operators, it's 
possible for users to convert a tensor with a large element size to a tensor 
with a large dimension size. This will possibly cause issues there.

To cover more cases, MKL-DNN is going to support INT64 dimension size in 
its coming 1.0 major release.

-tao

-Original Message-
From: Lin Yuan [mailto:apefor...@gmail.com] 
Sent: Tuesday, April 30, 2019 12:56 AM
To: dev@mxnet.incubator.apache.org
    Subject: Re: [RFC] Support for creation of Large Tensors in MXNet

Tao,

- what's the max size of dimensionality? Which data type is used to define 
dimensionality (ndims)?
We assume the max size of dimensionality is relatively small. Hence `int` 
data type is used to define ndim

- what's the max size of each dimension? Which data type is used to define 
dimension size (shape[x])?
Currently, we assume the max size of each dimension is not going to exceed
2^31 in real applications. Hence the data type is `int32_t`

- what's the max size of total elements? Which data type is used to define 
element size (Prod(shape))?
We assume the total number of elements in a tensor can be larger than 2^32 
in some applications such as deep graph library. We use the data type `int64_t` 
to represent the total element size. Currently due to performance regression in 
some operators (such as transpose), we use a compiler flag to set this data 
type to `int32_t` by default. Once we have ways to mitigate the performance 
regression, we will set the default data type to `int64_t`, which is part of 
the effort in this project that Rohit proposed.

What is the plan in MKLDNN to support large tensors? We may want to 
coordinate the progress since many operators are using MKLDNN implementation in 
CPU now.

Many Thanks,

Lin

On Sun, Apr 28, 2019 at 7:52 PM Lv, Tao A  wrote:

> Thank you for bringing this topic to dev, Rohit.
>
> Regarding large tensor, can you articulate:
> - what's the max size of dimensionality? Which data type is used to 
> define dimensionality (ndims)?
> - what's the max size of each dimension? Which data type is used to 
> define dimension size (shape[x])?
> - what's the max size of total elements? Which data type is used to 
> define element size (Prod(shape))?
>
> For me, any of these three can be *large*.
>
> -Original Message-
> From: Srivastava, Rohit Kumar 
> [mailto:srivastava@buckeyemail.osu.edu]
> Sent: Saturday, April 27, 2019 7:33 AM
> To: dev@mxnet.incubator.apache.org
> Subject: [RFC] Support for creation of Large Tensors in MXNet
>
> Dear Community,
>
> Currently MXNet supports creation of Tensors containing up to 2^32 
> elements. However, there are cases where tensors of size over 5 billion 
> are required.
>
> We plan to support creation of large tensors on MXNet. A design 
> proposal is ready for review:
> https://cwiki.apache.org/confluence/display/MXNET/Large+Tensor+Support
>
> We will appreciate any help and feedback from the community.
>
> Thank you!
>
> Rohit
>




Re: [Proposal] New operator graph for MXNet

2019-05-17 Thread Pedro Larroy
Hi Tianqi and Junru.

MXNet as a piece of software is in its teens and needs to mature. The
community needs to have an honest discussion and decide whether MXNet is a
production or a research framework.

If it's a production framework, we need to apply the YAGNI principle
and decide what is and what is not supported, and whether we are focusing
on training or inference. In any case it should be possible to refactor
the code to be solid, easy to maintain, and resilient to bugs. This
includes reducing the surface area for present and future bugs, saying
no to features, and taking advantage of every tool, including the C++
type system. As ML makes further inroads into products and our everyday
life, it should be held to the same engineering principles as other
pieces of production software; otherwise you end up in bad situations
which can be avoided with good engineering. It's not fun to debug a
dictionary of string to dmlc::any in C++. It's basically just one level
above having to decode machine instructions and hexadecimal dumps from
memory, and we are in 2019, we have tools.
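
To illustrate the debugging pain (std::any standing in for dmlc::any; a
sketch, not MXNet code):

    #include <any>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    int main() {
      std::unordered_map<std::string, std::any> attrs;
      attrs["inline"] = true;                   // nothing checks the type...
      attrs["storage"] = std::string("dense");
      try {
        // ...so asking for the wrong type only fails at runtime:
        int x = std::any_cast<int>(attrs["inline"]);
        std::cout << x << "\n";
      } catch (const std::bad_any_cast &e) {
        std::cout << "runtime failure: " << e.what() << "\n";
      }
      return 0;
    }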

As someone who is supporting MXNet use-cases in production as well as
developing new features, I will say that we are spending too much effort
on issues derived from deficiencies in these areas, effort which could be
better spent advancing the SOTA in TVM or adding features to MXNet.

Taking a high level view of the issue, I don't think it is beneficial
right now for either project to be co-dependent. I think in TVM and
NNVM2 you want to iterate and experiment fast, and in MXNet you want
to bias towards stability and maintainability; the speed and agility
is naturally going to be different. In an analogy to programming
languages, MXNet would start to become the Java platform and TVM is
Haskell... I'm not saying that we should or should not use NNVM2 in
the future. But this is not something that should be sneaked into
MXNet through a sub-repository without discussion, planning and proper
testing.

I have extensively (re)read the Relay and TVM papers, including their
references. As it stands today, the goals of the TVM project are
different from the goals of MXNet, and the design choices and
constraints diverge:

Some of the points you make are surprising to me when I look at the
codebase as a non-PMC member:

Dynamic language support is implemented through the C++ API and
doesn't require dynamic attributes in the graph. Could you come up with
an example where any modification towards a different graph
implementation would affect the bindings of the dynamic languages for
MXNet?

Mental burden of templates: I have never seen so much reliance on
template magic in any other project than MXNet. I don't think it is
difficult for any of the MXNet developers to understand a Node class
passed as a template argument to a graph.
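
For reference, the kind of structure being discussed is roughly this sketch
(simplified, not the proposed MXNet code):

    #include <utility>
    #include <vector>

    template <typename Node>
    struct Graph {
      std::vector<Node> nodes;  // node type is checked at compile time
      void add(Node n) { nodes.push_back(std::move(n)); }
    };

    struct OpNode {
      const char *name;
      std::vector<int> inputs;  // indices of producer nodes
    };

    int main() {
      Graph<OpNode> g;
      g.add({"conv2d", {}});
      g.add({"relu", {0}});
      return g.nodes.size() == 2 ? 0 : 1;
    }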

TVM is selling typing and a pure functional IR, yet for MXNet
developers this is dismissed as a nit and a matter of engineering
taste.

Also, how relevant will be having the graph mutated through a dynamic
language when some of the deep learning community is leaning towards
adding differentiable programming to static languages like Swift?
When you have the hammer of a dynamic language everything looks like a
dictionary of strings.

There are ZERO unit tests for those critical code paths and classes in
NNVM. And no, the end-to-end python tests don't count as unit tests
for a C++ class without bindings in my book.

Happy weekend.

Pedro.



On Tue, May 14, 2019 at 8:03 PM Tianqi Chen  wrote:
>
> The core part of the proposal is to move the graph to be a much more
> strongly typed template class.
> I think this is mainly a point of engineering taste, and both sides have
> pros and cons, let me list them before I share my thoughts on this issue:
>
> - Typed fields certainly enjoy more compile-time type checking, on the
> other hand, it is hard to expose templates of explosive possibilities
> to frontend languages.
> - More type-erased fields provide runtime flexibility to store polymorphic
> types as well as extensible attributes for graph optimization
>   - It is hard to use a virtual class to expose every possible attribute
> that an operator might have, such as inlining, storage pattern, gradient
> etc..
>   - The nature of supporting a growing set of operator attribute requires a
> type-erased attrs field.
> - In contrast to your argument (typing is a blocker to features),
> type-erased or typed code can both get to the same feature, except
> that
>   typed code gets more compile-time errors while type-erased get some of
> them in runtime.
> - Templatized data structures will likely introduce additional mental
> burdens to developers and are not really suitable as a core data structure
>- Because they imply an explosive number of possible data structures,
> while the core data structure should be a single one.
>
> Now my view (as an MXNet PMC member) o

Re: Jenkins possibly compromised

2019-05-16 Thread Marco de Abreu
Hey again,

the webhook has successfully been re-activated and thus all PRs should be
processed again. As part of the process, we had to retrigger all jobs which
will re-evaluate all PRs and make a huge queue on our side. Autoscaling
will kick in and try to chew through all jobs, but it will nonetheless take
a few hours to process the queue. In a few hours, everything should have
normalized and we will check back to make sure we're running fine. Until
then, we ask you to be patient and to excuse any inconveniences caused.

Best regards,
Marco

On Thu, May 16, 2019 at 3:15 PM Marco de Abreu 
wrote:

> Hello dev@,
>
> we noticed some fishy logs on our Jenkins and are afraid that a successful
> attack might have happened. Thus, we are taking security precautions and
> rotating all credentials and revoking active sessions.
>
> As part of this process, we have to rotate the GitHub webhook secret. This
> update requires communication with Apache Infra, which will render Jenkins
> unusable for PR verification until the secret has been updated on Apache
> Infras side.
>
> Please excuse any inconveniences this may have caused.
>
> Best regards,
> Marco
>


Re: [DISCUSS] 1.5.0 Release Plan

2019-05-15 Thread Junru Shao
Hi folks,

Here I may have a release blocker for 1.5.0 regarding the implementation of
the dynamic shape mechanism, which somehow conflicts with Gluon's deferred
initialization [1].

[1] https://github.com/dmlc/gluon-nlp/issues/706

On Wed, May 15, 2019 at 12:09 PM Anirudh Subramanian 
wrote:

> Hi Lai,
>
> From the discussion I had with Nvidia offline, they are targeting pushing
> the required changes today.
> Since this is an important feature for the release, if this gets delayed and
> cannot be merged by 05/17/2019,
> the code freeze date may need to be changed.
>
> Anirudh
>
> On Wed, May 15, 2019 at 1:23 AM Lv, Tao A  wrote:
>
> > Hi dev,
> >
> > We see there are several github issues [1][2][3][4] about the mxnet windows
> > build experience. The team is working intensively [5][6][7] on that to
> > fix some problems of the MKL-DNN build on windows. We hope these fixes can
> > make the code freeze and finally enter the 1.5.0 release.
> >
> > The PR against mshadow (#374) was already merged and MXNet PR #14877 is
> > under review - great thanks to the CI team for helping on the MKL
> > installation request. PR #14952 is a documentation change matching the
> > build logic changes in PR #14877. So I think these two PRs should be
> > merged simultaneously.
> > Currently #14877 is experiencing a CI response problem.
> >
> > Please take your time to have a look at these two PRs. Your comments and
> > suggestions are highly appreciated.
> >
> > Thanks,
> > -tao
> >
> > [1] https://github.com/apache/incubator-mxnet/issues/14670
> > [2] https://github.com/apache/incubator-mxnet/issues/14335
> > [3] https://github.com/apache/incubator-mxnet/issues/14203
> > [4] https://github.com/apache/incubator-mxnet/issues/14085
> > [5] https://github.com/apache/incubator-mxnet/pull/14877
> > [6] https://github.com/dmlc/mshadow/pull/374
> > [7] https://github.com/apache/incubator-mxnet/pull/14952
> >
> > -Original Message-
> > From: Lai Wei [mailto:roywei...@gmail.com]
> > Sent: Wednesday, May 15, 2019 2:57 PM
> > To: dev@mxnet.incubator.apache.org
> > Subject: Re: [DISCUSS] 1.5.0 Release Plan
> >
> > Hi Anirudh,
> >
> > I see there was an offline discussion
> > <
> >
> https://github.com/apache/incubator-mxnet/pull/14173#pullrequestreview-235846341
> > >
> > and I have updated the AMP feature and your project on the release
> tracker
> > <
> >
> https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status
> > >
> > ,
> > Please let me know if you have any updates.
> >
> > Hi @dev,
> > This is a gentle reminder that the code freeze for the 1.5.0 release is on
> > 05/17/2019. Please let us know if you have any WIP pull requests aiming
> > for 1.5.0 that need attention.
> > Please understand we already have around 650 commits in master that need
> > to be released in time. We understand TensorRT test in CI is failing and
> > are trying to fix it. Meanwhile please update the tracker if there is any
> > change:
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status
> >
> > Thanks!
> >
> > Lai
> >
> >
> > On Wed, May 8, 2019 at 11:58 AM Anirudh Subramanian <
> anirudh2...@gmail.com
> > >
> > wrote:
> >
> > > Hi Sheng,
> > >
> > > I had a discussion with nvidia folks offline today (@ptrendx et al.).
> > > I strongly feel that the AMP feature should be included as part of the
> > > release: https://github.com/apache/incubator-mxnet/pull/14173 .
> > > The PR is aimed for completion next week but reviews and RFC
> > > discussions may take some time. I would request to extend the release
> > > code freeze by 2 weeks.
> > > Also, I would like to include
> > >
> > > https://cwiki.apache.org/confluence/display/MXNET/Conversion+from+FP32+to+Mixed+Precision+Models
> > > which
> > > depends on the AMP PR.
> > > I am also aiming for adding a PR by this week end or early next week,
> > > but reviews will take longer than May 17th.
> > >
> > > Anirudh
> > >
> > >
> > > On Mon, May 6, 2019 at 11:49 PM Sheng Zha  wrote:
> > >
> > > > Hi,
> > > >
> > > > While the 1.4.1 vote on general@incubator is still ongoing, I’d like to
> > > > propose that we start preparing the 1.5.0 release.
> > > >
> > > > 1.5.0 will include changes that date back to last year and there
> 

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Junru Shao
Hi Zach,

Thank you for raising these points! I am happy to offer more reading
materials about this topic.

*SSA vs ANF.* ANF and SSA are essentially the same thing [1].

*AD in Relay.* Relay is able to do AD through not only control flow, but
also various data structures and higher-order functions [2].

[1] Appel, Andrew W. "SSA is functional programming." *ACM SIGPLAN
Notices* 33.4
(1998): 17-20.
[2] Roesch, Jared, et al. "Relay: a new IR for machine learning
frameworks." *Proceedings of the 2nd ACM SIGPLAN International Workshop on
Machine Learning and Programming Languages*. ACM, 2018.
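
For readers unfamiliar with the correspondence, a tiny sketch: computing
y = (a + b) * (a + b) in SSA assigns every temporary exactly once, and ANF is
the let-bound functional spelling of the same thing.

    #include <cstdio>

    int example(int a, int b) {
      // SSA form: each temporary is assigned exactly once
      int t1 = a + b;
      int t2 = t1 * t1;  // y = (a + b) * (a + b)
      return t2;
      // ANF spelling of the same computation:
      //   let t1 = a + b in let t2 = t1 * t1 in t2
    }

    int main() { printf("%d\n", example(2, 3)); return 0; }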


On Wed, May 15, 2019 at 12:01 PM Zach Kimberg 
wrote:

> I would like to raise another option to get back on the topic of changing
> the Operator graph structure. On the page discussing Relay IR [1], it
> discusses mainly the difference between a data flow graph like we use now
> and A-normal form [2] which is used in some functional compilers. Is there a
> reason we do not want to use a structure based on Static Single Assignment
> Form (wikipedia explanation [3], lecture note explanation [4]). It is used
> almost universally in the compiler community including in LLVM (clang),
> GCC, Oracle JVM, PyPy, Go, Webkit, and Swift [5]. The major reason behind
> its pervasiveness is that it has proven very effective for analysis and
> transformations when dealing with control flow.
>
> One possible concern is that it might make automatic differentiation more
> difficult [6]. While it certainly is more complicated than a pure
> functional approach, the functional approach requires users to use
> functional programming. Especially with the languages we support now, that
> doesn't seem like a reasonable assumption. Given that the users are already
> introducing the complexity inherent in imperative programming, we have to
> deal with the increased complexity regardless. I think it might be easier
> to have the tools to deal with that rather than attempting to coerce users
> into a different programming paradigm or convert code between paradigms.
> Furthermore, this may become more important if users are increasingly
> making use of control flow like Junru said.
>
> Zach
>
>
> [1] - https://docs.tvm.ai/dev/relay_intro.html
> [2] - https://en.wikipedia.org/wiki/A-normal_form
> [3] - https://en.wikipedia.org/wiki/Static_single_assignment_form
> [4] - https://www.cs.cmu.edu/~rjsimmon/15411-f15/lec/10-ssa.pdf
> [5] -
>
> https://en.wikipedia.org/wiki/Static_single_assignment_form#Compilers_using_SSA_form
> [6] - https://discuss.tvm.ai/t/choice-about-ir-ssa-or-anf/1757/2
>
> On Wed, May 15, 2019 at 11:51 AM Naveen Swamy  wrote:
>
> > Being dismissive and condescending has been exactly what is plaguing this
> > project.
> >
> > I agree the last paragraph sounds very condescending and very dismissive
> > and it breaks many of the code of conduct points listed.
> >
> > On Wed, May 15, 2019 at 11:31 AM Anirudh Subramanian <
> > anirudh2...@gmail.com>
> > wrote:
> >
> > > Hi Junru,
> > >
> > > Overall, I appreciate the points you made about the proposal.
> > >
> > > Having said that, I would like to remind the Apache Code of Conduct :
> > > https://www.apache.org/foundation/policies/conduct.
> > > "Be empathetic, welcoming, friendly and patient".
> > >
> > > I find your tone condescending. Clearly you understand what he meant
> > > from the context, whether you prefer to call it IR as in compilers or
> > > data-flow as in distributed systems. You could very well say let's use
> > > this terminology to have a common understanding instead of saying go
> > > learn the basic concepts.
> > > Before building a cool brand, it's important to build a healthy
> > > community.
> > >
> > > Anirudh
> > >
> > >
> > > On Wed, May 15, 2019 at 12:03 AM Junru Shao 
> > > wrote:
> > >
> > > > Hi Pedro,
> > > >
> > > > I really appreciate that a diligent and talented engineer eagerly
> > > > wants to improve our system, and am very thankful that you have done
> > > > so much for our community. However, I do want to mention some points
> > > > that I believe I should mention.
> > > >
> > > > While I agree with Tianqi that every design has its pros and cons, I
> > > > would love to emphasize that a *good taste* of system design is to
> > > > optimize the bottleneck, enhance expressiveness (and usability), i.e.
> > > > to do what needs doing, rather than *trivial nits* that are irrel

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Tianqi Chen
> > > > I really appreciate that a diligent and talented engineer eagerly
> > > > wants to improve our system, and am very thankful that you have done
> > > > so much for our community. However, I do want to mention some points
> > > > that I believe I should mention.
> > > >
> > > > While I agree with Tianqi that every design has its pros and cons, I
> > > > would love to emphasize that a *good taste* of system design is to
> > > > optimize the bottleneck, enhance expressiveness (and usability), i.e.
> > > > to do what needs doing, rather than *trivial nits* that are irrelevant
> > > > to either performance or expressiveness. Generally speaking, typed or
> > > > untyped, shared_ptr or unique_ptr, won't affect the overall performance
> > > > when it comes to deep learning workload, especially when we have an
> > > > async scheduler that does good latency hiding in MXNet - to me, these
> > > > are not major issues that are worth re-designing our entire system.
> > > >
> > > > To benefit users - real-world ML practitioners, the main thing I
> > > > would love to mention is that dataflow graph-based representation is
> > > > increasingly incapable of representing modern neural networks, because
> > > > of increasingly common structures like arbitrary control flow (w/
> > > > continue, break, etc), recursion, type conjunction and disjunction,
> > > > etc. These issues will be our priority to address, and Relay
> > > > addresses all these pain points.
> > > >
> > > > Another minor thing I would love to humbly mention is that, for the
> > > > sake of our brand, it is our responsibility to be professional about
> > > > terminologies when writing an official proposal on Confluence. As one
> > > > of the numerous examples, the title of the proposal really shocked me
> > > > for a while, something like "operators graph" blah blah so weird.
> > > > Educate me if I were wrong, but the compiler community would prefer
> > > > the term "intermediate representation", and the distributed system
> > > > community would prefer "dataflow graph". If you don't have knowledge
> > > > in these fields, a better way for efficient communication is to first
> > > > familiarize yourself with the most basic concepts and then do the
> > > > discussion. This is a way to save your own valuable time as well.
> > > >
> > > > Again, thank you so much for your hard work, and hope that we could
> > > > work together to win customers in the future :-)
> > > >
> > > > Thanks,
> > > > Junru
> > > >
> > > >
> > > > On Tue, May 14, 2019 at 8:03 PM Tianqi Chen <
> tqc...@cs.washington.edu>
> > > > wrote:
> > > >
> > > > > The core part of the proposal is to move the graph to be a much
> > > > > more strongly typed template class.
> > > > > I think this is mainly a point of engineering taste, and both sides
> > > have
> > > > > pros and cons, let me list them before I share my thoughts on this
> > > issue:
> > > > >
> > > > > - Typed fields certainly enjoy more compile-time type checking, on
> > the
> > > > > other hand, it is hard to expose
> > > > >template of explosive possibilities to frontend languages.
> > > > > - More type-erased fields provide runtime flexibility to store
> > > > polymorphic
> > > > > types as well as extensible attributes for graph optimization
> > > > >   - It is hard to use a virtual class to expose every possible
> > > attribute
> > > > > that an operator might have, such as inlining, storage pattern,
> > > gradient
> > > > > etc..
> > > > >   - The nature of supporting a growing set of operator attribute
> > > > requires a
> > > > > type-erased attrs field.
> > > > > - In contrast to your argument (typing is a blocker to features),
> > > > > type-erased or typed code can both get to the same feature except

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Pedro Larroy
Hi

Thanks for all the materials and key points raised. The discussion has
many ramifications, I will think about them and research them very
carefully before replying further. Please also don't quickly dismiss
the points I have raised and reduce them to typed vs untyped or
pedantic C++ comments, we have been debugging missing nodes and
pointers in the graph when doing second order gradient for weeks with
no success due to the design of the graph.

There are 60 years of software development lessons and practice behind
some concepts, and compiler theory that deep learning frameworks can
also take advantage of instead of rediscovering everything again until
we end up in a typed pure functional IR.
In some of the materials linked you also point out limitations of the
current architecture. I think it's good that we raise this topic and
it shows that we need to have a deeper and structured conversation on
how we evolve the dataflow graph in MXNet. Maybe you can help cross-pollinate
this conversation between the TVM and MXNet projects. If
there's an intention to change from NNVM to NNVM2 I think this should
have been communicated or discussed with the community before.

Until then.

Pedro.




On Tue, May 14, 2019 at 8:03 PM Tianqi Chen  wrote:
>
> The core part of the proposal is to move the graph to be a much more
> strongly typed template class.
> I think this is mainly a point of engineering taste, and both sides have
> pros and cons, let me list them before I share my thoughts on this issue:
>
> - Typed fields certainly enjoy more compile-time type checking, on the
> other hand, it is hard to expose templates of explosive possibilities
> to frontend languages.
> - More type-erased fields provide runtime flexibility to store polymorphic
> types as well as extensible attributes for graph optimization
>   - It is hard to use a virtual class to expose every possible attribute
> that an operator might have, such as inlining, storage pattern, gradient
> etc..
>   - The nature of supporting a growing set of operator attribute requires a
> type-erased attrs field.
> - In contrast to your argument (typing is a blocker to features),
> type-erased or typed code can both get to the same feature, except
> that
>   typed code gets more compile-time errors while type-erased get some of
> them in runtime.
> - Templatized data structures will likely introduce additional mental
> burdens to developers and are not really suitable as a core data structure
>- Because they imply an explosive number of possible data structures,
> while the core data structure should be a single one.
>
> Now my view (as an MXNet PMC member) on typed vs type-erased style: If MXNet
> were a pure C++ project, I might take more of the typed approach.
> However, MXNet itself is a project that takes python/scala/clojure and
> other frontend languages.
> The introduction of more typing may not align with the original goal, given
> the tradeoffs I listed above.
>
> This proposal is really a drastic change of what NNVM does, as well as the
> optimization passes, and given the scope, in your analogy, "a new vehicle
> to solve all the problems"
> rather than a minor patch. It will take a lot of engineering effort to
> bring in new features and adapting the existing ones.
> Because of that, it does merit a discussion about how shall we think about
> the future MXNet2.0.
>
> Technically Relay is a serious candidate. Of course relay, as well as its
> core, is in C++ but maintains the multi-language first principle, that is
> why the example code was in python.
> See more related discussion comparing NNVMv1 and relay:
> https://discuss.tvm.ai/t/any-materials-of-relay-for-beginners/2392/5
>
> I think the ideal graph data structure candidate for MXNet2.0 should have
> natural support for:
> - Native support of function, module, and recursions
> - Control flows
> - The ability of interoperation with multi-language frontends, e.g. being
> able to prototype graph optimizations in python/scala/clojure if needed.
>
> Adding this support needs significant engineering effort, and I do hope we
> only have to do it once. While I don't want to force any conclusion here,
> I do think Relay is one such candidate.
>
> Tianqi
>
>
> On Tue, May 14, 2019 at 5:58 PM Pedro Larroy 
> wrote:
>
> > Hi Tianqi
> >
> > Thanks for the quick response.
> >
> > Could you point to examples where graph.h is being exposed which would
> > not be possible with what I propose? I don't think my proposal is
> > having any impact in language bindings, and the way I describe it
> > doesn't affect having or not having higher language bindings. Please
> > elaborate so I can understand your concern.  Maybe code examples where
> > the graph attributes are being changed from Python?  I don't think we
> > have this on MXNet. This is such a core foundation for MXNet, that I
> > don't think we should compromise on it because other projects not
> > directly related to MXNet might want to expose some untyped 

Re: Python2 End of Life

2019-05-15 Thread Damien Stanton
+1 Standardizing on Python 3 will make things easier for both MXNet devs as
well as users.

On Wed, May 15, 2019 at 2:49 PM sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> +1 Thanks for bringing this up Zach.
> Can we include this intent to deprecate support for Python 2 in the
> upcoming MXNet 1.5 release? This will help the MXNet community to have enough
> advance notice of the proposed plan.
>
> Best,
> Sandeep
>
> On Wed, May 15, 2019 at 11:29 AM Zach Kimberg 
> wrote:
>
> > The website I listed earlier (https://python3statement.org/) is backed by
> > a git repository
> > (https://github.com/python3statement/python3statement.github.io) so that
> > projects can open a PR to add themselves to the list. Beyond that, they
> > also have a very nice timeline that projects can add themselves to which
> > details when their support ends. This might be a good first place to
> > check for knowing which dependencies might affect us. Here are some of the
> > notable projects and their support that are in the timeline:
> >
> > Projects currently Python3 only: pandas, scikit-learn
> > Projects dropping support between now and Jan 1: IPython, XGBoost, rpy2,
> > dateutil
> > Projects dropping support on Jan 1: CPython, Numpy, Pillow, Scipy,
> > matplotlib, Spyder
> >
> > My hope is that following this discussion, we decide on a timeline and
> > add ourselves to this site as well. Does anyone disagree with the choice
> > of Jan 1?
> >
> > On Wed, May 15, 2019 at 2:40 AM Marco de Abreu 
> > wrote:
> >
> > > +1
> > >
> > > I'd like to point out that one of our dependencies, scikit, already
> > > dropped support for python 2. If more dependencies drop support before
> > > 1.1.20, we might start running into further issues like we already did.
> > > As part of
> > > that decision, I'd propose to see what the detailed timelines of our
> > > dependencies are and then adjust our timeline accordingly.
> > >
> > > -Marco
> > >
> > > Pedro Larroy  schrieb am Mi., 15. Mai
> > 2019,
> > > 00:15:
> > >
> > > > +1 Let python2 rest, let's simplify our infrastructure and drop the
> > > > need to support old Python versions.
> > > >
> > > > On Mon, May 13, 2019 at 1:58 PM Jake Lee  wrote:
> > > > >
> > > > > +1 Recently I upgraded the Numpy version and found out that Pylint
> > > > > had a false alarm on it. The Pylint fix is only available on
> > > > > Python3. So I changed the default python version of the 'make
> > > > > pylint' command to python3 (PR hasn't been merged yet). It's time to
> > > > > drop support for Python2.
> > > > >
> > > > > On Mon, May 13, 2019 at 1:37 PM Junru Shao <
> junrushao1...@gmail.com>
> > > > wrote:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > On Mon, May 13, 2019 at 1:34 PM Aaron Markham <
> > > > aaron.s.mark...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > +1 for the pledge and to start moving things to Python 3.
> > > > > > > I think our installation instructions and tutorials can be
> > > > > > > updated to default to Python3 and we should update Python2-only
> > > > > > > tutorials. I know we have a handful of those, and when I spot
> > > > > > > them, I'll create an issue.
> > > > > > > I can also look at migrating the docs build to Python 3.
> > > > > > > Should we add a new label for issues relating to migrating to
> > > > > > > Python3?
> > > > > > > Cheers,
> > > > > > > Aaron
> > > > > > >
> > > > > > > On Mon, May 13, 2019 at 12:04 PM Zach Kimberg <
> > > > zachary.kimb...@gmail.com
> > > > > > >
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > Right now, the official date for ending support for Python 2.7
> > > > > > > > (and all of python2) is set to January 1 [1]. As part of it, a
> > > > > > > > number of projects have pledged to drop support for Python2 in
> > > > > > > > or before 2020 including Tensorflow, requests, pandas, ipython,
> > > > > > > > numpy, pillow, and Cython [2]. I believe we should also join in
> > > > > > > > this pledge on python3statement.org [2] because it would help
> > > > > > > > clean up our project and it would be difficult to continue
> > > > > > > > supporting Python2 anyway when some of our dependencies are
> > > > > > > > dropping support.
> > > > > > > >
> > > > > > > > As a concrete step, we should decide on a date to remove all
> > > > > > > > usages of Python2 from our CI and consider that officially
> > > > > > > > dropping support. Following that, we can expect PRs will end up
> > > > > > > > breaking support for Python2. I suggest just using the same
> > > > > > > > date that Python is dropping support of January 1. We may also
> > > > > > > > need to update some examples or scripts that were written only
> > > > > > > > for python2 that are around the project. Any thoughts?
> > > > > > > >
> > > > > > > > Zach
> > > > > > > >
> > > > > > > >
> > > > > > > > 

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Junru Shao
Hi Anirudh, Naveen,

Thank you so much for the gentle remainder!

I am not a native speaker, which resulted in the mistake. I would love to
say a sincere sorry to Pedro. Pedro is working really hard to grow our
community and improving our code base. I sincerely apologize for what I
have said in a hurry.

Let’s work hard together to grow a healthy community!

Thanks,
Junru

On Wed, May 15, 2019 at 11:51 Naveen Swamy  wrote:

> Being dismissive and condescending has been exactly what is plaguing this
> project.
>
> I agree the last paragraph sounds very condescending and very dismissive
> and it breaks many of the code of conduct points listed.
>
> On Wed, May 15, 2019 at 11:31 AM Anirudh Subramanian <
> anirudh2...@gmail.com>
> wrote:
>
> > Hi Junru,
> >
> > Overall, I appreciate the points you made about the proposal.
> >
> > Having said that, I would like to remind the Apache Code of Conduct :
> > https://www.apache.org/foundation/policies/conduct.
> > "Be empathetic, welcoming, friendly and patient".
> >
> > I find your tone condescending. Clearly you understand what he meant from
> > the context, whether you prefer to call it IR as in compilers or data-flow
> > as in distributed systems. You could very well say let's use this
> > terminology to have a common understanding instead of saying go learn the
> > basic concepts.
> > Before building a cool brand, it's important to build a healthy community.
> >
> > Anirudh
> >
> >
> > On Wed, May 15, 2019 at 12:03 AM Junru Shao 
> > wrote:
> >
> > > Hi Pedro,
> > >
> > > I really appreciate that a diligent and talented engineer eagerly wants
> > > to improve our system, and am very thankful that you have done so much
> > > for our community. However, I do want to mention some points that I
> > > believe I should mention.
> > >
> > > While I agree with Tianqi that every design has its pros and cons, I
> > > would love to emphasize that a *good taste* of system design is to
> > > optimize the bottleneck, enhance expressiveness (and usability), i.e. to
> > > do what needs doing, rather than *trivial nits* that are irrelevant to
> > > either performance or expressiveness. Generally speaking, typed or
> > > untyped, shared_ptr or unique_ptr, won't affect the overall performance
> > > when it comes to deep learning workload, especially when we have an
> > > async scheduler that does good latency hiding in MXNet - to me, these
> > > are not major issues that are worth re-designing our entire system.
> > >
> > > To benefit users - real-world ML practitioners, the main thing I would
> > > love to mention is that dataflow graph-based representation is
> > > increasingly incapable of representing modern neural networks, because
> > > of increasingly common structures like arbitrary control flow (w/
> > > continue, break, etc), recursion, type conjunction and disjunction, etc.
> > > These issues will be our priority to address, and Relay addresses all
> > > these pain points.
> > >
> > > Another minor thing I would love to humbly mention is that, for the
> > > sake of our brand, it is our responsibility to be professional about
> > > terminologies when writing an official proposal on Confluence. As one of
> > > the numerous examples, the title of the proposal really shocked me for a
> > > while, something like "operators graph" blah blah so weird. Educate me
> > > if I were wrong, but the compiler community would prefer the term
> > > "intermediate representation", and the distributed system community
> > > would prefer "dataflow graph". If you don't have knowledge in these
> > > fields, a better way for efficient communication is to first familiarize
> > > yourself with the most basic concepts and then do the discussion. This
> > > is a way to save your own valuable time as well.
> > >
> > > Again, thank you so much for your hard work, and hope that we could
> > > work together to win customers in the future :-)
> > >
> > > Thanks,
> > > Junru
> > >
> > >
> > > On Tue, May 14, 2019 at 8:03 PM Tianqi Chen 
> > > wrote:
> > >
> > > > The core part of the proposal is to move the graph to be a much more
> > > > strongly typed template class.
> > > > I think this is mainly a point of

Re: [DISCUSS] 1.5.0 Release Plan

2019-05-15 Thread Anirudh Subramanian
Hi Lai,

From the discussion I had with Nvidia offline, they are targeting pushing
the required changes today.
Since this is an important feature for the release, if this gets delayed and
cannot be merged by 05/17/2019,
the code freeze date may need to be changed.

Anirudh

On Wed, May 15, 2019 at 1:23 AM Lv, Tao A  wrote:

> Hi dev,
>
> We see there are several github issues [1][2][3][4] about the mxnet windows
> build experience. The team is working intensively [5][6][7] on that to fix
> some problems of the MKL-DNN build on windows. We hope these fixes can make
> the code freeze and finally enter the 1.5.0 release.
>
> The PR against mshadow (#374) was already merged and MXNet PR #14877 is
> under review - great thanks to the CI team for helping on the MKL installation
> request. PR #14952 is a documentation change matching the build logic changes
> in PR #14877. So I think these two PRs should be merged simultaneously.
> Currently #14877 is experiencing a CI response problem.
>
> Please take your time to have a look at these two PRs. Your comments and
> suggestions are highly appreciated.
>
> Thanks,
> -tao
>
> [1] https://github.com/apache/incubator-mxnet/issues/14670
> [2] https://github.com/apache/incubator-mxnet/issues/14335
> [3] https://github.com/apache/incubator-mxnet/issues/14203
> [4] https://github.com/apache/incubator-mxnet/issues/14085
> [5] https://github.com/apache/incubator-mxnet/pull/14877
> [6] https://github.com/dmlc/mshadow/pull/374
> [7] https://github.com/apache/incubator-mxnet/pull/14952
>
> -Original Message-
> From: Lai Wei [mailto:roywei...@gmail.com]
> Sent: Wednesday, May 15, 2019 2:57 PM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [DISCUSS] 1.5.0 Release Plan
>
> Hi Anirudh,
>
> I see there was an offline discussion
> <
> https://github.com/apache/incubator-mxnet/pull/14173#pullrequestreview-235846341
> >
> and I have updated the AMP feature and your project on the release tracker
> <
> https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status
> >
> ,
> Please let me know if you have any updates.
>
> Hi @dev,
> This is a gentle reminder that the code freeze for 1.5.0 release is on
> 05/17/2019; please let us know if you have any WIP pull requests aiming
> for 1.5.0 that need attention.
> Please understand we already have around 650 commits in master that need
> to be released in time. We understand the TensorRT test in CI is failing
> and we are trying to fix it. Meanwhile please update the tracker if there
> is any change:
>
> https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status
>
> Thanks!
>
> Lai
>
>
> On Wed, May 8, 2019 at 11:58 AM Anirudh Subramanian  >
> wrote:
>
> > Hi Sheng,
> >
> > I had a discussion with nvidia folks offline today (@ptrendx et al.).
> > I strongly feel that the AMP feature should be included as part of the
> > release: https://github.com/apache/incubator-mxnet/pull/14173 .
> > The PR is aimed for completion next week but reviews and RFC
> > discussions may take some time. I would request extending the release
> > code freeze by 2 weeks.
> > Also, I would like to include
> >
> > https://cwiki.apache.org/confluence/display/MXNET/Conversion+from+FP32
> > +to+Mixed+Precision+Models
> > which
> > depends on the AMP PR.
> > I am also aiming to add a PR by this weekend or early next week,
> > but reviews will take longer than May 17th.
> >
> > Anirudh
> >
> >
> > On Mon, May 6, 2019 at 11:49 PM Sheng Zha  wrote:
> >
> > > Hi,
> > >
> > > While 1.4.1 vote on general@incubator is still on going, I’d like to
> > > propose that we start preparing 1.5.0 release.
> > >
> > > 1.5.0 will include changes that date back to last year, and there
> > > have been a lot of new features and improvements in it, so it will
> > > likely take us more time to prepare than 1.4.1. I propose the
> > > following timeline:
> > > - Cut release branch: release branch already cut. Will sync with
> > > master branch on 5/15/2019 EOD.
> > > - Code freeze: 5/17/2019. No more changes unless the release branch
> > > is in a broken state.
> > > - Tag and vote: 5/20/2019 onward.
> > >
> > > Lai Wei (roywei@) expressed to me offline that he’s willing to help
> > drive
> > > this release as release manager, and I’m happy to help again as
> > committer.
> > >
> > > If you have features in progress that you’d like to include in 1.5.0:
> > > - Add your feature to the scope:
> > >
> > https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+a
> > nd+Status
> > > - Indicate in this thread:
> > >   - how confident you are about making it happen before the code
> freeze.
> > > If not confident, provide an estimate for a more manageable code freeze
> > > date so that people can discuss whether to extend the deadline or to
> > > skip one release for it.
> > > - whether your PR requires more attention to make it happen.
> > >
> > > Thanks for your attention. Comments and suggestions are also welcome.
> > >
> > > -sz
> >
>


Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Zach Kimberg
I would like to raise another option to get back on the topic of changing
the Operator graph structure. On the page discussing Relay IR [1], it
discusses mainly the difference between a data flow graph like we use now
and A-normal form [2] which is used in some functional compilers. Is there a
reason we do not want to use a structure based on Static Single Assignment
form (Wikipedia explanation [3], lecture note explanation [4])? It is used
almost universally in the compiler community, including in LLVM (clang),
GCC, the Oracle JVM, PyPy, Go, WebKit, and Swift [5]. The major reason
behind its pervasiveness is that it has proven very effective for analysis
and transformations when dealing with control flow.

One possible concern is that it might make automatic differentiation more
difficult [6]. While it certainly is more complicated than a pure
functional approach, the functional approach requires users to use
functional programming. Especially with the languages we support now, that
doesn't seem like a reasonable assumption. Given that the users are already
introducing the complexity inherent in imperative programming, we have to
deal with the increased complexity regardless. I think it might be easier
to have the tools to deal with that rather than attempting to coerce users
into a different programming paradigm or convert code between paradigms.
Furthermore, this may become more important if users are increasingly
making use of control flow like Junru said.

Zach


[1] - https://docs.tvm.ai/dev/relay_intro.html
[2] - https://en.wikipedia.org/wiki/A-normal_form
[3] - https://en.wikipedia.org/wiki/Static_single_assignment_form
[4] - https://www.cs.cmu.edu/~rjsimmon/15411-f15/lec/10-ssa.pdf
[5] -
https://en.wikipedia.org/wiki/Static_single_assignment_form#Compilers_using_SSA_form
[6] - https://discuss.tvm.ai/t/choice-about-ir-ssa-or-anf/1757/2
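
To make the SSA idea concrete for readers who haven't seen it, here is a
minimal, hypothetical sketch in plain C++ (not MXNet code): in SSA form
every variable is assigned exactly once, and values that merge across
control-flow paths are joined by phi nodes, which is what makes analyses
and transformations under control flow tractable.

#include <iostream>

// A tiny function with control flow; its SSA form is shown in comments.
int clamp_negative(int x) {
  int y = x;    // SSA: y0 = x
  if (y < 0) {  //      branch on (y0 < 0)
    y = 0;      //      y1 = 0
  }
                //      y2 = phi(y1, y0)  <- merge of the two paths
  return y;     //      return y2
}

int main() {
  std::cout << clamp_negative(-5) << "\n";  // prints 0
  return 0;
}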

On Wed, May 15, 2019 at 11:51 AM Naveen Swamy  wrote:

> Being dismissive and condescending has been exactly what is plaguing this
> project.
>
> I agree the last paragraph sounds very condescending and very dismissive,
> and it breaks many points of the code of conduct listed.
>
> On Wed, May 15, 2019 at 11:31 AM Anirudh Subramanian <
> anirudh2...@gmail.com>
> wrote:
>
> > Hi Junru,
> >
> > Overall, I appreciate the points you made about the proposal.
> >
> > Having said that, I would like to remind the Apache Code of Conduct :
> > https://www.apache.org/foundation/policies/conduct.
> > "Be empathetic, welcoming, friendly and patient".
> >
> > I find your tone condescending. Clearly you understand what he meant
> > from the context, whether you prefer to call it IR as in compilers or a
> > dataflow graph as in distributed systems. You could very well say "let's
> > use this terminology to have a common understanding" instead of saying
> > "go learn the basic concepts". Before building a cool brand, it's
> > important to build a healthy community.
> >
> > Anirudh
> >
> >
> > On Wed, May 15, 2019 at 12:03 AM Junru Shao 
> > wrote:
> >
> > > Hi Pedro,
> > >
> > > I really appreciate that a diligent and talented engineer eagerly
> > > wants to improve our system, and am very thankful that you have done
> > > so much for our community. However, I do want to mention some points
> > > that I believe are important.
> > >
> > > While I agree with Tianqi that every design has its pros and cons, I
> > > would love to emphasize that *good taste* in system design is to
> > > optimize the bottleneck and enhance expressiveness (and usability),
> > > i.e. to do what needs doing, rather than *trivial nits* that are
> > > irrelevant to either performance or expressiveness. Generally
> > > speaking, typed or untyped, shared_ptr or unique_ptr won't affect the
> > > overall performance when it comes to deep learning workloads,
> > > especially when we have an async scheduler that does good latency
> > > hiding in MXNet - to me, these are not major issues that are worth
> > > re-designing our entire system.
> > >
> > > To benefit users - real-world ML practitioners - the main thing I
> > > would love to mention is that dataflow graph-based representation is
> > > increasingly incapable of expressing modern neural networks, because
> > > of increasingly common structures like arbitrary control flow (w/
> > > continue, break, etc.), recursion, type conjunction and disjunction,
> > > etc. Addressing these issues will be our priority, and Relay addresses
> > > all of these pain points.
> > >
> > > Another minor thing I wou

Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Naveen Swamy
Being dismissive and condescending has been exactly what is plaguing this
project.

I agree the last paragraph sounds very condescending and very dismissive,
and it breaks many points of the code of conduct listed.

On Wed, May 15, 2019 at 11:31 AM Anirudh Subramanian 
wrote:

> Hi Junru,
>
> Overall, I appreciate the points you made about the proposal.
>
> Having said that, I would like to remind the Apache Code of Conduct :
> https://www.apache.org/foundation/policies/conduct.
> "Be empathetic, welcoming, friendly and patient".
>
> I find your tone condescending. Clearly you understand what he meant from
> the context, whether you prefer to call it IR as in compilers or a
> dataflow graph as in distributed systems. You could very well say "let's
> use this terminology to have a common understanding" instead of saying "go
> learn the basic concepts". Before building a cool brand, it's important to
> build a healthy community.
>
> Anirudh
>
>
> On Wed, May 15, 2019 at 12:03 AM Junru Shao 
> wrote:
>
> > Hi Pedro,
> >
> > I really appreciate that a diligent and talented engineer eagerly wants
> > to improve our system, and am very thankful that you have done so much
> > for our community. However, I do want to mention some points that I
> > believe are important.
> >
> > While I agree with Tianqi that every design has its pros and cons, I
> > would love to emphasize that *good taste* in system design is to
> > optimize the bottleneck and enhance expressiveness (and usability), i.e.
> > to do what needs doing, rather than *trivial nits* that are irrelevant
> > to either performance or expressiveness. Generally speaking, typed or
> > untyped, shared_ptr or unique_ptr won't affect the overall performance
> > when it comes to deep learning workloads, especially when we have an
> > async scheduler that does good latency hiding in MXNet - to me, these
> > are not major issues that are worth re-designing our entire system.
> >
> > To benefit users - real-world ML practitioners - the main thing I would
> > love to mention is that dataflow graph-based representation is
> > increasingly incapable of expressing modern neural networks, because of
> > increasingly common structures like arbitrary control flow (w/ continue,
> > break, etc.), recursion, type conjunction and disjunction, etc.
> > Addressing these issues will be our priority, and Relay addresses all of
> > these pain points.
> >
> > Another minor thing I would love to humbly mention is that, for the
> > sake of our brand, it is our responsibility to be professional about
> > terminology when writing an official proposal on Confluence. As one of
> > numerous examples, the title of the proposal really shocked me for a
> > while - something like "operators graph" sounds so weird. Educate me if
> > I'm wrong, but the compiler community would prefer the term
> > "intermediate representation", and the distributed systems community
> > would prefer "dataflow graph". If you don't have knowledge in these
> > fields, a better way to communicate efficiently is to first familiarize
> > yourself with the most basic concepts and then start the discussion.
> > This is a way to save your own valuable time as well.
> >
> > Again, thank you so much for your hard work, and I hope that we can
> > work together to win customers in the future :-)
> >
> > Thanks,
> > Junru
> >
> >
> > On Tue, May 14, 2019 at 8:03 PM Tianqi Chen 
> > wrote:
> >
> > > The core part of the proposal is to move the graph to a much more
> > > strongly typed template class.
> > > I think this is mainly a point of engineering taste, and both sides
> > > have pros and cons; let me list them before I share my thoughts on
> > > this issue:
> > >
> > > - Typed fields certainly enjoy more compile-time type checking; on
> > >   the other hand, it is hard to expose a template with an explosive
> > >   number of possibilities to frontend languages.
> > > - More type-erased fields provide runtime flexibility to store
> > >   polymorphic types as well as extensible attributes for graph
> > >   optimization.
> > >   - It is hard to use a virtual class to expose every possible
> > >     attribute that an operator might have, such as inlining, storage
> > >     pattern, gradient, etc.
> > >   - The nature of supporting a growing set of operator attributes
> > >     requires a type-erased attrs field.
> > > - In contrast to your argument (typing is a blocker to features

Re: Python2 End of Life

2019-05-15 Thread sandeep krishnamurthy
+1 Thanks for bringing this up Zach.
Can we include this intent to deprecate support for Python 2 in the
upcoming MXNet 1.5 release? This will help the MXNet community have enough
advance notice of the proposed plan.

Best,
Sandeep

On Wed, May 15, 2019 at 11:29 AM Zach Kimberg 
wrote:

> The website I listed earlier (https://python3statement.org/) is backed by
> a
> git repository (
> https://github.com/python3statement/python3statement.github.io) so that
> projects can open a PR to add themselves to the list. Beyond that, they
> also have a very nice timeline that projects can add themselves to which
> details when their support ends. This might be a good first place to check
> for knowing which dependencies might affect us. Here are some of the
> notable projects and their support that are in the timeline:
>
> Projects currently Python3 only: pandas, scikit-learn
> Projects dropping support between now and Jan 1: IPython, XGBoost, rpy2,
> dateutil
> Projects dropping support on Jan 1: CPython, Numpy, Pillow, Scipy,
> matplotlib, Spyder
>
> My hope is that following this discussion, we decide on a timeline and add
> ourselves to this site as well. Does anyone disagree with the choice of Jan
> 1?
>
> On Wed, May 15, 2019 at 2:40 AM Marco de Abreu 
> wrote:
>
> > +1
> >
> > I'd like to point out that one of our dependencies, scikit-learn,
> > already dropped support for Python 2. If more dependencies drop support
> > before January 1, 2020, we might start running into further issues like
> > we already did. As part of that decision, I'd propose to see what the
> > detailed timelines of our dependencies are and then adjust our timeline
> > accordingly.
> >
> > -Marco
> >
> > Pedro Larroy  schrieb am Mi., 15. Mai
> 2019,
> > 00:15:
> >
> > > +1  Let python2 rest; let's simplify our infrastructure and remove
> > > the need to support old Python versions.
> > >
> > > On Mon, May 13, 2019 at 1:58 PM Jake Lee  wrote:
> > > >
> > > > +1 Recently I upgraded the Numpy version and found out that Pylint
> > > > had a false alarm on it. The Pylint fix is only available on
> > > > Python3, so I changed the default python version of the 'make
> > > > pylint' command to python3 (the PR hasn't been merged yet). It's
> > > > time to drop support for Python2.
> > > >
> > > > On Mon, May 13, 2019 at 1:37 PM Junru Shao 
> > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Mon, May 13, 2019 at 1:34 PM Aaron Markham <
> > > aaron.s.mark...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > +1 for the pledge and to start moving things to Python 3.
> > > > > > I think our installation instructions and tutorials can be
> updated
> > to
> > > > > > default to Python3 and we should update Python2-only tutorials. I
> > > know
> > > > > > we have a handful of those, and when I spot them, I'll create an
> > > > > > issue.
> > > > > > I can also look at migrating the docs build to Python 3.
> > > > > > Should we add a new label for issues relating to migrating to
> > > Python3?
> > > > > > Cheers,
> > > > > > Aaron
> > > > > >
> > > > > > On Mon, May 13, 2019 at 12:04 PM Zach Kimberg <
> > > zachary.kimb...@gmail.com
> > > > > >
> > > > > > wrote:
> > > > > > >
> > > > > > > Right now, the official date for ending support for Python 2.7
> > > (and all
> > > > > > of
> > > > > > > python2) is set to January 1 [1]. As part of it, a number of
> > > projects
> > > > > > have
> > > > > > > pledged to drop support for Python2 in or before 2020 including
> > > > > > Tensorflow,
> > > > > > > requests, pandas, ipython, numpy, pillow, and Cython [2]. I
> > > believe we
> > > > > > > should also join in this pledge on python3statement.org [2]
> > > because it
> > > > > > > would help clean up our project and it would be difficult to
> > > continue
> > > > > > > supporting Python2 anyway when some of our dependencies are
> > > dropping
> > > > > > > support.
> > > > > > >
> > > > > > > As a concrete step, we should decide on a date to remove all
> > > usages of
> > > > > > > Python2 from our CI and consider that officially dropping
> > support.
> > > > > > > Following that, we can expect PRs will end up breaking support
> > for
> > > > > > Python2.
> > > > > > > I suggest just using the same date that Python is dropping
> > support
> > > of
> > > > > > > January 1. We may also need to update some examples or scripts
> > that
> > > > > were
> > > > > > > written only for python2 that are around the project. Any
> > thoughts?
> > > > > > >
> > > > > > > Zach
> > > > > > >
> > > > > > >
> > > > > > > [1] - https://www.python.org/dev/peps/pep-0373/
> > > > > > > [2] - https://python3statement.org/
> > > > > >
> > > > >
> > >
> >
>


-- 
Sandeep Krishnamurthy


Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Anirudh Subramanian
Hi Junru,

Overall, I appreciate the points you made about the proposal.

Having said that, I would like to remind the Apache Code of Conduct :
https://www.apache.org/foundation/policies/conduct.
"Be empathetic, welcoming, friendly and patient".

I find your tone condescending. Clearly you understand what he meant from
the context, whether you prefer to call it IR as in compilers or a dataflow
graph as in distributed systems. You could very well say "let's use this
terminology to have a common understanding" instead of saying "go learn the
basic concepts". Before building a cool brand, it's important to build a
healthy community.

Anirudh


On Wed, May 15, 2019 at 12:03 AM Junru Shao  wrote:

> Hi Pedro,
>
> I really appreciate that a diligent and talented engineer eagerly wants to
> improve our system, and am very thankful that you have done so much for
> our community. However, I do want to mention some points that I believe
> are important.
>
> While I agree with Tianqi that every design has its pros and cons, I would
> love to emphasize that *good taste* in system design is to optimize the
> bottleneck and enhance expressiveness (and usability), i.e. to do what
> needs doing, rather than *trivial nits* that are irrelevant to either
> performance or expressiveness. Generally speaking, typed or untyped,
> shared_ptr or unique_ptr won't affect the overall performance when it
> comes to deep learning workloads, especially when we have an async
> scheduler that does good latency hiding in MXNet - to me, these are not
> major issues that are worth re-designing our entire system.
>
> To benefit users - real-world ML practitioners - the main thing I would
> love to mention is that dataflow graph-based representation is
> increasingly incapable of expressing modern neural networks, because of
> increasingly common structures like arbitrary control flow (w/ continue,
> break, etc.), recursion, type conjunction and disjunction, etc. Addressing
> these issues will be our priority, and Relay addresses all of these pain
> points.
>
> Another minor thing I would love to humbly mention is that, for the sake
> of our brand, it is our responsibility to be professional about
> terminology when writing an official proposal on Confluence. As one of
> numerous examples, the title of the proposal really shocked me for a while
> - something like "operators graph" sounds so weird. Educate me if I'm
> wrong, but the compiler community would prefer the term "intermediate
> representation", and the distributed systems community would prefer
> "dataflow graph". If you don't have knowledge in these fields, a better
> way to communicate efficiently is to first familiarize yourself with the
> most basic concepts and then start the discussion. This is a way to save
> your own valuable time as well.
>
> Again, thank you so much for your hard work, and I hope that we can work
> together to win customers in the future :-)
>
> Thanks,
> Junru
>
>
> On Tue, May 14, 2019 at 8:03 PM Tianqi Chen 
> wrote:
>
> > The core part of the proposal is to move the graph to a much more
> > strongly typed template class.
> > I think this is mainly a point of engineering taste, and both sides have
> > pros and cons; let me list them before I share my thoughts on this issue:
> >
> > - Typed fields certainly enjoy more compile-time type checking; on the
> >   other hand, it is hard to expose a template with an explosive number
> >   of possibilities to frontend languages.
> > - More type-erased fields provide runtime flexibility to store
> >   polymorphic types as well as extensible attributes for graph
> >   optimization.
> >   - It is hard to use a virtual class to expose every possible attribute
> >     that an operator might have, such as inlining, storage pattern,
> >     gradient, etc.
> >   - The nature of supporting a growing set of operator attributes
> >     requires a type-erased attrs field.
> > - In contrast to your argument (typing is a blocker to features),
> >   type-erased or typed code can both get to the same feature, except
> >   that typed code gets more compile-time errors while type-erased code
> >   gets some of them at runtime.
> > - Templatized data structures will likely introduce additional mental
> >   burdens for developers and are not really suitable as a core data
> >   structure, because they imply an explosive number of possible data
> >   structures, while the core data structure should be a single one.
> >
> > Now my view (as an MXNet PMC member) on typed vs type-erased style: if
> > MXNet were a pure C++ project, I might take more of the typed approach.
> > However, MXNet itself is a project that takes python/scala/cl

Re: Python2 End of Life

2019-05-15 Thread Zach Kimberg
The website I listed earlier (https://python3statement.org/) is backed by a
git repository (
https://github.com/python3statement/python3statement.github.io) so that
projects can open a PR to add themselves to the list. Beyond that, they
also have a very nice timeline that projects can add themselves to which
details when their support ends. This might be a good first place to check
for knowing which dependencies might affect us. Here are some of the
notable projects and their support that are in the timeline:

Projects currently Python3 only: pandas, scikit-learn
Projects dropping support between now and Jan 1: IPython, XGBoost, rpy2,
dateutil
Projects dropping support on Jan 1: CPython, Numpy, Pillow, Scipy,
matplotlib, Spyder

My hope is that following this discussion, we decide on a timeline and add
ourselves to this site as well. Does anyone disagree with the choice of Jan
1?

On Wed, May 15, 2019 at 2:40 AM Marco de Abreu 
wrote:

> +1
>
> I'd like to point out that one of our dependencies, scikit-learn, already
> dropped support for Python 2. If more dependencies drop support before
> January 1, 2020, we might start running into further issues like we
> already did. As part of that decision, I'd propose to see what the
> detailed timelines of our dependencies are and then adjust our timeline
> accordingly.
>
> -Marco
>
> Pedro Larroy  schrieb am Mi., 15. Mai 2019,
> 00:15:
>
> > +1  Let python2 rest; let's simplify our infrastructure and remove the
> > need to support old Python versions.
> >
> > On Mon, May 13, 2019 at 1:58 PM Jake Lee  wrote:
> > >
> > > +1 Recently I upgraded the Numpy version and found out that Pylint had
> > > a false alarm on it. The Pylint fix is only available on Python3, so I
> > > changed the default python version of the 'make pylint' command to
> > > python3 (the PR hasn't been merged yet). It's time to drop support for
> > > Python2.
> > >
> > > On Mon, May 13, 2019 at 1:37 PM Junru Shao 
> > wrote:
> > >
> > > > +1
> > > >
> > > > On Mon, May 13, 2019 at 1:34 PM Aaron Markham <
> > aaron.s.mark...@gmail.com>
> > > > wrote:
> > > >
> > > > > +1 for the pledge and to start moving things to Python 3.
> > > > > I think our installation instructions and tutorials can be updated
> to
> > > > > default to Python3 and we should update Python2-only tutorials. I
> > know
> > > > > we have a handful of those, and when I spot them, I'll create an
> > > > > issue.
> > > > > I can also look at migrating the docs build to Python 3.
> > > > > Should we add a new label for issues relating to migrating to
> > Python3?
> > > > > Cheers,
> > > > > Aaron
> > > > >
> > > > > On Mon, May 13, 2019 at 12:04 PM Zach Kimberg <
> > zachary.kimb...@gmail.com
> > > > >
> > > > > wrote:
> > > > > >
> > > > > > Right now, the official date for ending support for Python 2.7
> > (and all
> > > > > of
> > > > > > python2) is set to January 1 [1]. As part of it, a number of
> > projects
> > > > > have
> > > > > > pledged to drop support for Python2 in or before 2020 including
> > > > > Tensorflow,
> > > > > > requests, pandas, ipython, numpy, pillow, and Cython [2]. I
> > believe we
> > > > > > should also join in this pledge on python3statement.org [2]
> > because it
> > > > > > would help clean up our project and it would be difficult to
> > continue
> > > > > > supporting Python2 anyway when some of our dependencies are
> > dropping
> > > > > > support.
> > > > > >
> > > > > > As a concrete step, we should decide on a date to remove all
> > usages of
> > > > > > Python2 from our CI and consider that officially dropping
> support.
> > > > > > Following that, we can expect PRs will end up breaking support
> for
> > > > > Python2.
> > > > > > I suggest just using the same date that Python is dropping
> support
> > of
> > > > > > January 1. We may also need to update some examples or scripts
> that
> > > > were
> > > > > > written only for python2 that are around the project. Any
> thoughts?
> > > > > >
> > > > > > Zach
> > > > > >
> > > > > >
> > > > > > [1] - https://www.python.org/dev/peps/pep-0373/
> > > > > > [2] - https://python3statement.org/
> > > > >
> > > >
> >
>


Re: TensorRT blocker

2019-05-15 Thread Per da Silva
Hey,

Yup - I've @'ed you on the fix PR; it would be great to get your 2c there
just to be sure it's all good.
https://github.com/apache/incubator-mxnet/pull/14960

Cheers,

Per

On Wed, May 15, 2019 at 4:14 PM Sunderland, Kellen
 wrote:

> Looks like it's merged. Can I help with a fix, Per?
>
> On May 15, 2019 3:00 AM, Per da Silva  wrote:
> Hi everyone,
>
> Could a committer please merge this PR:
> https://github.com/apache/incubator-mxnet/pull/14958
>
> It disables the TensorRT steps to unblock CI while a fix is being worked
> on.
>
> Cheers,
>
> Per
>


Re: TensorRT blocker

2019-05-15 Thread Sunderland, Kellen
Looks like it's merged. Can I help with a fix, Per?

On May 15, 2019 3:00 AM, Per da Silva  wrote:
Hi everyone,

Could a committer please merge this PR:
https://github.com/apache/incubator-mxnet/pull/14958

It disables the TensorRT steps to unblock CI while a fix is being worked on.

Cheers,

Per


Re: Python2 End of Life

2019-05-15 Thread Marco de Abreu
+1

I'd like to point out that one of our dependencies, scikit-learn, already
dropped support for Python 2. If more dependencies drop support before
January 1, 2020, we might start running into further issues like we already
did. As part of that decision, I'd propose to see what the detailed
timelines of our dependencies are and then adjust our timeline accordingly.

-Marco

Pedro Larroy  schrieb am Mi., 15. Mai 2019,
00:15:

> +1  Let python2 rest; let's simplify our infrastructure and remove the
> need to support old Python versions.
>
> On Mon, May 13, 2019 at 1:58 PM Jake Lee  wrote:
> >
> > +1 Recently I upgraded the Numpy version and found out that Pylint had
> > a false alarm on it. The Pylint fix is only available on Python3, so I
> > changed the default python version of the 'make pylint' command to
> > python3 (the PR hasn't been merged yet). It's time to drop support for
> > Python2.
> >
> > On Mon, May 13, 2019 at 1:37 PM Junru Shao 
> wrote:
> >
> > > +1
> > >
> > > On Mon, May 13, 2019 at 1:34 PM Aaron Markham <
> aaron.s.mark...@gmail.com>
> > > wrote:
> > >
> > > > +1 for the pledge and to start moving things to Python 3.
> > > > I think our installation instructions and tutorials can be updated to
> > > > default to Python3 and we should update Python2-only tutorials. I
> know
> > > > we have a handful of those, and when I spot them, I'll create an
> > > > issue.
> > > > I can also look at migrating the docs build to Python 3.
> > > > Should we add a new label for issues relating to migrating to
> Python3?
> > > > Cheers,
> > > > Aaron
> > > >
> > > > On Mon, May 13, 2019 at 12:04 PM Zach Kimberg <
> zachary.kimb...@gmail.com
> > > >
> > > > wrote:
> > > > >
> > > > > Right now, the official date for ending support for Python 2.7
> (and all
> > > > of
> > > > > python2) is set to January 1 [1]. As part of it, a number of
> projects
> > > > have
> > > > > pledged to drop support for Python2 in or before 2020 including
> > > > Tensorflow,
> > > > > requests, pandas, ipython, numpy, pillow, and Cython [2]. I
> believe we
> > > > > should also join in this pledge on python3statement.org [2]
> because it
> > > > > would help clean up our project and it would be difficult to
> continue
> > > > > supporting Python2 anyway when some of our dependencies are
> dropping
> > > > > support.
> > > > >
> > > > > As a concrete step, we should decide on a date to remove all
> usages of
> > > > > Python2 from our CI and consider that officially dropping support.
> > > > > Following that, we can expect PRs will end up breaking support for
> > > > Python2.
> > > > > I suggest just using the same date that Python is dropping support
> of
> > > > > January 1. We may also need to update some examples or scripts that
> > > were
> > > > > written only for python2 that are around the project. Any thoughts?
> > > > >
> > > > > Zach
> > > > >
> > > > >
> > > > > [1] - https://www.python.org/dev/peps/pep-0373/
> > > > > [2] - https://python3statement.org/
> > > >
> > >
>


RE: [DISCUSS] 1.5.0 Release Plan

2019-05-15 Thread Lv, Tao A
Hi dev,

We see there are several GitHub issues [1][2][3][4] about the MXNet Windows
build experience. The team is working intensively [5][6][7] to fix some
problems of the MKL-DNN build on Windows. We hope these fixes can make the
code freeze and finally enter the 1.5.0 release.

The PR against mshadow (#374) was already merged and MXNet PR #14877 is
under review - many thanks to the CI team for helping with the MKL
installation request. PR #14952 is a documentation change reflecting the
build logic changes in PR #14877, so I think these two PRs should be merged
simultaneously. Currently #14877 is experiencing a CI response problem.

Please take your time to have a look at these two PRs. Your comments and 
suggestions are highly appreciated.

Thanks,
-tao

[1] https://github.com/apache/incubator-mxnet/issues/14670 
[2] https://github.com/apache/incubator-mxnet/issues/14335  
[3] https://github.com/apache/incubator-mxnet/issues/14203 
[4] https://github.com/apache/incubator-mxnet/issues/14085  
[5] https://github.com/apache/incubator-mxnet/pull/14877 
[6] https://github.com/dmlc/mshadow/pull/374 
[7] https://github.com/apache/incubator-mxnet/pull/14952  

-Original Message-
From: Lai Wei [mailto:roywei...@gmail.com] 
Sent: Wednesday, May 15, 2019 2:57 PM
To: dev@mxnet.incubator.apache.org
Subject: Re: [DISCUSS] 1.5.0 Release Plan

Hi Anirudh,

I see there was an offline discussion
<https://github.com/apache/incubator-mxnet/pull/14173#pullrequestreview-235846341>
and I have updated the AMP feature and your project on the release tracker 
<https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status>
,
Please let me know if you have any updates.

Hi @dev,
This is a gentle reminder that the code freeze for 1.5.0 release is on
05/17/2019; please let us know if you have any WIP pull requests aiming for
1.5.0 that need attention.
Please understand we already have around 650 commits in master that need to
be released in time. We understand the TensorRT test in CI is failing and we
are trying to fix it. Meanwhile please update the tracker if there is any
change:
https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status

Thanks!

Lai


On Wed, May 8, 2019 at 11:58 AM Anirudh Subramanian 
wrote:

> Hi Sheng,
>
> I had a discussion with nvidia folks offline today (@ptrendx et al.).
> I strongly feel that the AMP feature should be included as part of the
> release: https://github.com/apache/incubator-mxnet/pull/14173 .
> The PR is aimed for completion next week but reviews and RFC
> discussions may take some time. I would request extending the release
> code freeze by 2 weeks.
> Also, I would like to include
>
> https://cwiki.apache.org/confluence/display/MXNET/Conversion+from+FP32
> +to+Mixed+Precision+Models
> which
> depends on the AMP PR.
> I am also aiming to add a PR by this weekend or early next week,
> but reviews will take longer than May 17th.
>
> Anirudh
>
>
> On Mon, May 6, 2019 at 11:49 PM Sheng Zha  wrote:
>
> > Hi,
> >
> > While 1.4.1 vote on general@incubator is still on going, I’d like to 
> > propose that we start preparing 1.5.0 release.
> >
> > 1.5.0 will include changes that date back to last year, and there have
> > been a lot of new features and improvements in it, so it will likely
> > take us more time to prepare than 1.4.1. I propose the following
> > timeline:
> > - Cut release branch: release branch already cut. Will sync with 
> > master branch on 5/15/2019 EOD.
> > - Code freeze: 5/17/2019. No more changes unless the release branch 
> > is in a broken state.
> > - Tag and vote: 5/20/2019 onward.
> >
> > Lai Wei (roywei@) expressed to me offline that he’s willing to help
> drive
> > this release as release manager, and I’m happy to help again as
> committer.
> >
> > If you have features in progress that you’d like to include in 1.5.0:
> > - Add your feature to the scope:
> >
> https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+a
> nd+Status
> > - Indicate in this thread:
> >   - how confident you are about making it happen before the code freeze.
> > If not confident, provide an estimate for a more manageable code freeze
> > date so that people can discuss whether to extend the deadline or to 
> > skip one release for it.
> > - whether your PR requires more attention to make it happen.
> >
> > Thanks for your attention. Comments and suggestions are also welcome.
> >
> > -sz
>


Re: [Proposal] New operator graph for MXNet

2019-05-15 Thread Junru Shao
Hi Pedro,

I really appreciate that a diligent and talented engineer eagerly wants to
improve our system, and am very thankful that you have done so much for our
community. However, I do want to mention some points that I believe are
important.

While I agree with Tianqi that every design has its pros and cons, I would
love to emphasize that *good taste* in system design is to optimize the
bottleneck and enhance expressiveness (and usability), i.e. to do what
needs doing, rather than *trivial nits* that are irrelevant to either
performance or expressiveness. Generally speaking, typed or untyped,
shared_ptr or unique_ptr won't affect the overall performance when it comes
to deep learning workloads, especially when we have an async scheduler that
does good latency hiding in MXNet - to me, these are not major issues that
are worth re-designing our entire system.

To benefit users - real-world ML practitioners - the main thing I would
love to mention is that dataflow graph-based representation is increasingly
incapable of expressing modern neural networks, because of increasingly
common structures like arbitrary control flow (w/ continue, break, etc.),
recursion, type conjunction and disjunction, etc. Addressing these issues
will be our priority, and Relay addresses all of these pain points.

Another minor thing I would love to humbly mention is that, for the sake of
our brand, it is our responsibility to be professional about terminology
when writing an official proposal on Confluence. As one of numerous
examples, the title of the proposal really shocked me for a while -
something like "operators graph" sounds so weird. Educate me if I'm wrong,
but the compiler community would prefer the term "intermediate
representation", and the distributed systems community would prefer
"dataflow graph". If you don't have knowledge in these fields, a better way
to communicate efficiently is to first familiarize yourself with the most
basic concepts and then start the discussion. This is a way to save your
own valuable time as well.

Again, thank you so much for your hard work, and I hope that we can work
together to win customers in the future :-)

Thanks,
Junru
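
As a concrete, hedged illustration of the control-flow point above
(hypothetical C++, unrelated to any actual MXNet operator): a static
dataflow graph fixes its set of nodes ahead of time, whereas the shape of
the computation below depends on the runtime input, so it cannot be
recorded as one fixed graph.

#include <cstdint>
#include <iostream>
#include <vector>

// The recursion depth and the branches taken depend on the input size,
// so the computation's structure is only known at runtime.
int64_t tree_sum(const std::vector<int64_t>& xs, size_t lo, size_t hi) {
  if (hi - lo == 1) return xs[lo];  // data-dependent base case
  size_t mid = lo + (hi - lo) / 2;
  return tree_sum(xs, lo, mid) + tree_sum(xs, mid, hi);  // recursion
}

int main() {
  std::vector<int64_t> xs = {1, 2, 3, 4, 5};
  std::cout << tree_sum(xs, 0, xs.size()) << "\n";  // prints 15
  return 0;
}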


On Tue, May 14, 2019 at 8:03 PM Tianqi Chen 
wrote:

> The core part of the proposal is to move the graph to a much more strongly
> typed template class.
> I think this is mainly a point of engineering taste, and both sides have
> pros and cons; let me list them before I share my thoughts on this issue:
>
> - Typed fields certainly enjoy more compile-time type checking; on the
>   other hand, it is hard to expose a template with an explosive number of
>   possibilities to frontend languages.
> - More type-erased fields provide runtime flexibility to store polymorphic
>   types as well as extensible attributes for graph optimization.
>   - It is hard to use a virtual class to expose every possible attribute
>     that an operator might have, such as inlining, storage pattern,
>     gradient, etc.
>   - The nature of supporting a growing set of operator attributes requires
>     a type-erased attrs field.
> - In contrast to your argument (typing is a blocker to features),
>   type-erased or typed code can both get to the same feature, except that
>   typed code gets more compile-time errors while type-erased code gets
>   some of them at runtime.
> - Templatized data structures will likely introduce additional mental
>   burdens for developers and are not really suitable as a core data
>   structure, because they imply an explosive number of possible data
>   structures, while the core data structure should be a single one.
>
> Now my view (as an MXNet PMC member) on typed vs type-erased style: if
> MXNet were a pure C++ project, I might take more of the typed approach.
> However, MXNet itself is a project that supports python/scala/clojure and
> other frontend languages.
> The introduction of more typing may not align with the original goal,
> given the tradeoffs I listed above.
>
> This proposal is really a drastic change to what NNVM does, as well as to
> the optimization passes, and given the scope it is, in your analogy, "a
> new vehicle to solve all the problems" rather than a minor patch. It will
> take a lot of engineering effort to bring in new features and adapt the
> existing ones. Because of that, it does merit a discussion about how we
> should think about the future MXNet 2.0.
>
> Technically Relay is a serious candidate. Of course Relay, as well as its
> core, is in C++ but maintains the multi-language-first principle; that is
> why the example code was in python.
> See more related discussion comparing NNVMv1 and relay:
> https://discuss.tvm.ai/t/any-materials-of-relay-for-beginners/2392/5
>
> I think the ideal graph data structure candidate for MXNet2.0 should have
> natural support for:
> - Native support of f

Re: [DISCUSS] 1.5.0 Release Plan

2019-05-15 Thread Lai Wei
Hi Anirudh,

I see there was an offline discussion
and I have updated the AMP feature and your project on the release tracker.
Please let me know if you have any updates.

Hi @dev,
This is a gentle reminder that the code freeze for 1.5.0 release is on
05/17/2019; please let us know if you have any WIP pull requests aiming for
1.5.0 that need attention.
Please understand we already have around 650 commits in master that need to
be released in time. We understand the TensorRT test in CI is failing and we
are trying to fix it. Meanwhile please update the tracker if there is any
change:
https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status

Thanks!

Lai


On Wed, May 8, 2019 at 11:58 AM Anirudh Subramanian 
wrote:

> Hi Sheng,
>
> I had a discussion with nvidia folks offline today (@ptrendx et al.). I
> strongly feel that the AMP feature should be included as part of the
> release: https://github.com/apache/incubator-mxnet/pull/14173 .
> The PR is aimed for completion next week but reviews and RFC
> discussions may take some time. I would request extending the release code
> freeze by 2 weeks.
> Also, I would like to include
>
> https://cwiki.apache.org/confluence/display/MXNET/Conversion+from+FP32+to+Mixed+Precision+Models
> which
> depends on the AMP PR.
> I am also aiming to add a PR by this weekend or early next week, but
> reviews will take longer than May 17th.
>
> Anirudh
>
>
> On Mon, May 6, 2019 at 11:49 PM Sheng Zha  wrote:
>
> > Hi,
> >
> > While 1.4.1 vote on general@incubator is still on going, I’d like to
> > propose that we start preparing 1.5.0 release.
> >
> > 1.5.0 will include changes that date back to last year, and there have
> > been a lot of new features and improvements in it, so it will likely
> > take us more time to prepare than 1.4.1. I propose the following
> > timeline:
> > - Cut release branch: release branch already cut. Will sync with master
> > branch on 5/15/2019 EOD.
> > - Code freeze: 5/17/2019. No more changes unless the release branch is in
> > a broken state.
> > - Tag and vote: 5/20/2019 onward.
> >
> > Lai Wei (roywei@) expressed to me offline that he’s willing to help
> drive
> > this release as release manager, and I’m happy to help again as
> committer.
> >
> > If you have features in progress that you’d like to include in 1.5.0:
> > - Add your feature to the scope:
> >
> https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status
> > - Indicate in this thread:
> >   - how confident you are about making it happen before the code freeze.
> > If not confident, provide an estimate for a more manageable code freeze date
> > so that people can discuss whether to extend the deadline or to skip one
> > release for it.
> > - whether your PR requires more attention to make it happen.
> >
> > Thanks for your attention. Comments and suggestions are also welcome.
> >
> > -sz
>


Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Tianqi Chen
The core part of the proposal is to move the graph to a much more strongly
typed template class.
I think this is mainly a point of engineering taste, and both sides have
pros and cons; let me list them before I share my thoughts on this issue:

- Typed fields certainly enjoy more compile-time type checking; on the
  other hand, it is hard to expose a template with an explosive number of
  possibilities to frontend languages.
- More type-erased fields provide runtime flexibility to store polymorphic
  types as well as extensible attributes for graph optimization.
  - It is hard to use a virtual class to expose every possible attribute
    that an operator might have, such as inlining, storage pattern,
    gradient, etc.
  - The nature of supporting a growing set of operator attributes requires
    a type-erased attrs field.
- In contrast to your argument (typing is a blocker to features),
  type-erased or typed code can both get to the same feature, except that
  typed code gets more compile-time errors while type-erased code gets some
  of them at runtime.
- Templatized data structures will likely introduce additional mental
  burdens for developers and are not really suitable as a core data
  structure, because they imply an explosive number of possible data
  structures, while the core data structure should be a single one.

Now my view (as an MXNet PMC member) on typed vs type-erased style: if MXNet
were a pure C++ project, I might take more of the typed approach.
However, MXNet itself is a project that supports python/scala/clojure and
other frontend languages.
The introduction of more typing may not align with the original goal, given
the tradeoffs I listed above.

This proposal is really a drastic change to what NNVM does, as well as to
the optimization passes, and given the scope it is, in your analogy, "a new
vehicle to solve all the problems" rather than a minor patch. It will take a
lot of engineering effort to bring in new features and adapt the existing
ones. Because of that, it does merit a discussion about how we should think
about the future MXNet 2.0.

Technically Relay is a serious candidate. Of course Relay, as well as its
core, is in C++ but maintains the multi-language-first principle; that is
why the example code was in python.
See more related discussion comparing NNVMv1 and relay:
https://discuss.tvm.ai/t/any-materials-of-relay-for-beginners/2392/5

I think the ideal graph data structure candidate for MXNet 2.0 should have
natural support for:
- functions, modules, and recursion
- control flow
- interoperation with multi-language frontends, e.g. being able to
  prototype graph optimizations in python/scala/clojure if needed.

Adding this support needs significant engineering effort, and I do hope we
only have to do it once. While I don't want to force any conclusion here,
I do think Relay is one such candidate.

Tianqi
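
To make the trade-off above concrete, here is a minimal C++17 sketch
(hypothetical types, not MXNet's actual data structures): the typed variant
catches mistakes at compile time, while the type-erased variant accepts an
open-ended set of attributes and reports mismatches only at runtime.

#include <any>
#include <iostream>
#include <string>
#include <unordered_map>

// Typed style: attributes are compile-time-checked fields.
struct TypedNodeAttrs {
  bool inline_hint = false;
  std::string storage_pattern = "default";
};

// Type-erased style: a growing set of attributes stored by name,
// checked only when retrieved.
struct ErasedNodeAttrs {
  std::unordered_map<std::string, std::any> dict;
};

int main() {
  TypedNodeAttrs typed;
  typed.inline_hint = true;           // a typo here fails to compile

  ErasedNodeAttrs erased;
  erased.dict["inline_hint"] = true;  // any key and any type are accepted
  // std::any_cast throws std::bad_any_cast if the stored type mismatches.
  std::cout << std::any_cast<bool>(erased.dict["inline_hint"]) << "\n";
  return 0;
}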


On Tue, May 14, 2019 at 5:58 PM Pedro Larroy 
wrote:

> Hi Tianqi
>
> Thanks for the quick response.
>
> Could you point to examples where graph.h is being exposed in a way that
> would not be possible with what I propose? I don't think my proposal has
> any impact on language bindings, and the way I describe it doesn't affect
> having or not having higher-level language bindings. Please elaborate so I
> can understand your concern. Maybe code examples where the graph
> attributes are being changed from Python? I don't think we have this in
> MXNet. This is such a core foundation for MXNet that I don't think we
> should compromise on it because other projects not directly related to
> MXNet might want to expose some untyped graph and Node attributes. The
> current status makes maintaining the code very painful and is also
> preventing desired features such as higher order gradients from being
> developed. I have heard from you many times how speed is critical for us
> to innovate in this quickly changing field.
>
> My proposal is limited to the graph and wouldn't change the way operators
> are registered or how operator arguments are processed, for example.
>
>
> Regarding the second point, the Relay documentation on the web that I
> found, for example:
>
> https://docs.tvm.ai/dev/relay_add_op.html#
>
> Is somebody working on making Imperative::Backward use this API? This
> would be a big change which I'm not aware of. And using an IR is of a much
> bigger scope than the change I'm proposing here, for example.
>
> I think I'm having difficulty understanding what the arguments are here.
> I'm saying I need to change one piece of my car, and you are selling me a
> new vehicle? Or is your suggestion that we use Relay for the graph passes
> in MXNet?
>
> I would like to see C++ code examples; Python examples are not
> sufficient when we talk about the core of MXNet.
>
> Pedro.
>
>
>
>
>
>
> On Tue, May 14, 2019 at 5:39 PM Tianqi Chen 
> wrote:
> >
> > Thanks for the proposal. Let me share some of my thoughts:
> >
> > Specific comments on the proposal
> > 

Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Pedro Larroy
Hi Tianqi

I thought a bit more about your comments and I think there is a simple
way to address your concerns that satisfies both needs.

We can have a NodeAttributes template class which has a map of string to
any, as is currently the case, so the graph can be used in the highly
dynamic scenario that you are concerned about.

Let me know what you think.

Pedro.
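
A minimal sketch of that compromise, under stated assumptions (hypothetical
names, not actual MXNet code): the node is templated on its attribute type,
and the default template argument is a string-to-any map, so the fully
dynamic use case keeps working unchanged.

#include <any>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Default attribute type: the dynamic string -> any map used today.
struct DynamicAttrs {
  std::unordered_map<std::string, std::any> dict;
};

template <typename Attrs = DynamicAttrs>
struct Node {
  std::string name;
  Attrs attrs;                 // typed when Attrs is a concrete struct
  std::vector<Node*> inputs;   // edges to producer nodes
};

// A strongly typed pass can then rely on compile-time-checked fields.
struct ShapeAttrs {
  std::vector<int64_t> shape;
};

int main() {
  Node<ShapeAttrs> typed;      // typed.attrs.shape is a real field
  typed.attrs.shape = {2, 3};

  Node<> dynamic;              // falls back to the string -> any map
  dynamic.attrs.dict["shape"] = std::vector<int64_t>{2, 3};
  return 0;
}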


On Tue, May 14, 2019 at 5:50 PM Pedro Larroy
 wrote:
>
> Hi Tianqi
>
> Thanks for the quick response.
>
> Could you point to examples where graph.h is being exposed in a way that
> would not be possible with what I propose? I don't think my proposal has
> any impact on language bindings, and the way I describe it doesn't affect
> having or not having higher-level language bindings. Please elaborate so I
> can understand your concern. Maybe code examples where the graph
> attributes are being changed from Python? I don't think we have this in
> MXNet. This is such a core foundation for MXNet that I don't think we
> should compromise on it because other projects not directly related to
> MXNet might want to expose some untyped graph and Node attributes. The
> current status makes maintaining the code very painful and is also
> preventing desired features such as higher order gradients from being
> developed. I have heard from you many times how speed is critical for us
> to innovate in this quickly changing field.
>
> My proposal is limited to the graph and wouldn't change the way operators
> are registered or how operator arguments are processed, for example.
>
>
> Regarding the second point, the Relay documentation on the web that I
> found, for example:
>
> https://docs.tvm.ai/dev/relay_add_op.html#
>
> Is somebody working on making Imperative::Backward use this API? This
> would be a big change which I'm not aware of. And using an IR is of a much
> bigger scope than the change I'm proposing here, for example.
>
> I think I'm having difficulty understanding what the arguments are here.
> I'm saying I need to change one piece of my car, and you are selling me a
> new vehicle? Or is your suggestion that we use Relay for the graph passes
> in MXNet?
>
> I would like to see C++ code examples; Python examples are not
> sufficient when we talk about the core of MXNet.
>
> Pedro.
>
>
>
>
>
>
> On Tue, May 14, 2019 at 5:39 PM Tianqi Chen  wrote:
> >
> > Thanks for the proposal. Let me share some of my thoughts:
> >
> > Specific comments on the proposal
> > ---
> > The heavy use of generics in the Graph type is a huge departure from
> > the type-erased data structure which was presented in the previous
> > design. While we understand the advantages of typed languages (more
> > compile-time checking) and of type-erased types (more dynamism), the
> > heavy use of templates will actually make the project solely C++
> > focused, making it hard to expose intermediate (templatized) data
> > structures to other languages like python/scala/clojure.
> >
> > While I fully understand some of the lessons taught in programming C++
> > (reduce shared_ptr, more typing, etc.), we need to think about the
> > context of the MXNet project and **the need to support multi-language
> > as a first-class concern**.
> > Some of the type-erased types are design trade-offs made to support
> > these features, and we need to think more carefully instead of just
> > applying "rules for C++", which may bring problems.
> >
> > Future of NNVM
> > --
> > Given that this thread touched upon what we should do for better
> > computational graph handling, I would recommend also taking a look at
> > NNVMv2 -- Relay.
> >
> > Relay already addresses many of the wish-list items in the proposal,
> > such as operator fusion, higher order gradients, offload to hardware,
> > isolated compilation, and deployment on edge devices and accelerators.
> > Relay also addresses problems not yet mentioned in the proposal,
> > including control flow, a dynamic runtime, automatic layout
> > optimization, etc.
> >
> > Tianqi
> >
> > On Tue, May 14, 2019 at 5:06 PM Sheng Zha  wrote:
> >
> > > Hi Pedro,
> > >
> > > Thanks for taking the initiative. Skimming through the design doc, I
> > > didn't see a comparison with existing solutions such as Relay in TVM,
> > > which is already a dependency of mxnet. Could you elaborate on the
> > > comparison with existing solutions in the design doc too?
> > >
> > > -sz
> > >
> > > On 2019/05/14 23:49:30, Pedro Larroy 
> > > wrote:
> > > > Hi dev@
> > > >
> > > > As a result of my deep dives on the graph machinery I have created a
> > > > new proposal to improve the operator graph in MXNet.
> > > >
> > > > This would mean superseding the use of NNVM Graph in MXNet and having
> > > > a new implementation that we can use to simplify a lot of code and do
> > > > powerful graph manipulation and passes such as operator fusion and
> > > > other optimizations.
> > > >
> > > > As it would be a change with big impact and ramifications, your
> 

Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Pedro Larroy
Hi Tianqi

Thanks for the quick response.

Could you point to examples where graph.h is being exposed in a way that
would not be possible with what I propose? I don't think my proposal has
any impact on language bindings, and the way I describe it doesn't affect
having or not having higher-level language bindings. Please elaborate so I
can understand your concern. Maybe code examples where the graph attributes
are being changed from Python? I don't think we have this in MXNet. This is
such a core foundation for MXNet that I don't think we should compromise on
it because other projects not directly related to MXNet might want to
expose some untyped graph and Node attributes. The current status makes
maintaining the code very painful and is also preventing desired features
such as higher order gradients from being developed. I have heard from you
many times how speed is critical for us to innovate in this quickly
changing field.

My proposal is limited to the graph and wouldn't change the way operators
are registered or how operator arguments are processed, for example.


Regarding the second point, the Relay documentation on the web that I
found, for example:

https://docs.tvm.ai/dev/relay_add_op.html#

Is somebody working on making Imperative::Backward use this API? This
would be a big change which I'm not aware of. And using an IR is of a
much bigger scope than the change I'm proposing here, for example.

I think I'm having difficulty understanding what the arguments are here.
I'm saying I need to change one piece of my car, and you are selling me a
new vehicle? Or is your suggestion that we use Relay for the graph passes
in MXNet?

I would like to see C++ code examples; Python examples are not
sufficient when we talk about the core of MXNet.

Pedro.






On Tue, May 14, 2019 at 5:39 PM Tianqi Chen  wrote:
>
> Thanks for the proposal. Let me share some of my thoughts:
>
> Specific comments on the proposal
> ---
> The heavy use of generics in the Graph type is a huge departure from the
> type-erased data structure which was presented in the previous design.
> While we understand the advantages of typed languages (more compile-time
> checking) and of type-erased types (more dynamism), the heavy use of
> templates will actually make the project solely C++ focused, making it
> hard to expose intermediate (templatized) data structures to other
> languages like python/scala/clojure.
>
> While I fully understand some of the lessons taught in programming C++
> (reduce shared_ptr, more typing, etc.), we need to think about the
> context of the MXNet project and **the need to support multi-language as
> a first-class concern**.
> Some of the type-erased types are design trade-offs made to support these
> features, and we need to think more carefully instead of just applying
> "rules for C++", which may bring problems.
>
> Future of NNVM
> --
> Given that this thread touched upon what we should do for better
> computational graph handling, I would recommend also taking a look at
> NNVMv2 -- Relay.
>
> Relay already addresses many of the wish-list items in the proposal, such
> as operator fusion, higher order gradients, offload to hardware, isolated
> compilation, and deployment on edge devices and accelerators.
> Relay also addresses problems not yet mentioned in the proposal, including
> control flow, a dynamic runtime, automatic layout optimization, etc.
>
> Tianqi
>
> On Tue, May 14, 2019 at 5:06 PM Sheng Zha  wrote:
>
> > Hi Pedro,
> >
> > Thanks for taking the initiative. Skimming through the design doc, I
> > didn't see a comparison with existing solutions such as Relay in TVM,
> > which is already a dependency of mxnet. Could you elaborate on the
> > comparison with existing solutions in the design doc too?
> >
> > -sz
> >
> > On 2019/05/14 23:49:30, Pedro Larroy 
> > wrote:
> > > Hi dev@
> > >
> > > As a result of my deep dives on the graph machinery I have created a
> > > new proposal to improve the operator graph in MXNet.
> > >
> > > This would mean superseding the use of NNVM Graph in MXNet and having
> > > a new implementation that we can use to simplify a lot of code and do
> > > powerful graph manipulation and passes such as operator fusion and
> > > other optimizations.
> > >
> > > As it would be a change with big impact and ramifications, your
> > > thoughts and feedback on the document would be highly appreciated,
> > > so we can take into account potential future use cases:
> > >
> > >
> > https://cwiki.apache.org/confluence/display/MXNET/MXVM%3A+Operator+graph+2.0
> > >
> > > Pedro.
> > >
> >


Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Tianqi Chen
Thanks for the proposal. Let me share some of my thoughts:

Specific comments on the proposal
---
The heavy use of generics in the Graph type is a huge departure from
the type-erased data structure presented in the previous design.
While we understand the advantages of a typed language (more
compile-time checking) and of type-erased types (more dynamism), the
heavy use of templates will actually make the project solely
C++-focused, making it hard to expose intermediate (templatized) data
structures to other languages like Python/Scala/Clojure.

While I fully understand some of the lessons taught in programming
C++ (reduce shared_ptr, more typing, etc.),
we need to think about the context of the MXNet project and **the need
to support multi-language as a first-class concern**.
Some of the type-erased types are design trade-offs made to support
these features, and we need to think more carefully instead of just
applying "rules for C++", which may bring problems.
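
A minimal sketch of the language-binding point: a type-erased graph can
sit behind a single opaque C handle that Python/Scala/Clojure can reach
through FFI, whereas a templated Graph<Attr> is a different concrete type
for every Attr, with no single symbol to bind against. All names here
(GraphHandle, MXGraphCreate, ...) are hypothetical, not the actual MXNet
C API.

// Hypothetical illustration only -- not real MXNet C API symbols.
#include <cstddef>

namespace impl {
struct Graph {                  // concrete C++ type, hidden from callers
  std::size_t num_nodes = 0;
};
}  // namespace impl

extern "C" {
typedef void* GraphHandle;      // the erasure: callers see an opaque pointer

GraphHandle MXGraphCreate() { return new impl::Graph(); }

std::size_t MXGraphNumNodes(GraphHandle h) {
  return static_cast<impl::Graph*>(h)->num_nodes;
}

void MXGraphFree(GraphHandle h) { delete static_cast<impl::Graph*>(h); }
}  // extern "C"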

Future of NNVM
--
Given that this thread touches upon what we should do for better
computational graph handling, I would also recommend taking a look at
NNVMv2 -- Relay.

Relay already addresses many of the wish-list items in the proposal,
such as operator fusion, higher-order gradients, offloading to
hardware, isolated compilation, and deployment on edge devices and
accelerators. Relay also addresses problems not yet mentioned in the
proposal, including control flow, a dynamic runtime, automatic layout
optimization, etc.

Tianqi

On Tue, May 14, 2019 at 5:06 PM Sheng Zha  wrote:

> Hi Pedro,
>
> Thanks for taking the initiative. Skimming through the design doc, I
> didn't see a comparison with existing solutions such as Relay in TVM,
> which is already a dependency of MXNet. Could you elaborate on the
> comparison with existing solutions in the design doc too?
>
> -sz
>
> On 2019/05/14 23:49:30, Pedro Larroy 
> wrote:
> > Hi dev@
> >
> > As a result of my deep dives on the graph machinery I have created a
> > new proposal to improve the operator graph in MXNet.
> >
> > This would mean superseding the use of NNVM Graph in MXNet and having
> > a new implementation that we can use to simplify a lot of code and do
> > powerful graph manipulation and passes such as operator fusion and
> > other optimizations.
> >
> > > As it would be a change with big impact and ramifications, your
> > > thoughts and feedback on the document would be highly appreciated,
> > > so we can take into account potential future use cases:
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/MXVM%3A+Operator+graph+2.0
> >
> > Pedro.
> >
>


Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Pedro Larroy
Hi Sheng

Could you provide relevant links to Relay and to what you would
recommend reading, so we have a focused discussion instead of me
potentially mis-searching? I probably also missed the discussion or
vote on the mailing list regarding including TVM as a dependency, or
future plans on using Relay.
As far as I know, we have TVM as a dependency because NNVM was
assimilated into it, but we are not using it directly. Is this
correct?

This would help me add this information to the doc, as you requested.

Thanks.

Pedro.

On Tue, May 14, 2019 at 5:06 PM Sheng Zha  wrote:
>
> Hi Pedro,
>
> Thanks for taking the initiative. Skimming through the design doc, I didn't
> see a comparison with existing solutions such as Relay in TVM, which is
> already a dependency of MXNet. Could you elaborate on the comparison with
> existing solutions in the design doc too?
>
> -sz
>
> On 2019/05/14 23:49:30, Pedro Larroy  wrote:
> > Hi dev@
> >
> > As a result of my deep dives on the graph machinery I have created a
> > new proposal to improve the operator graph in MXNet.
> >
> > This would mean superseding the use of NNVM Graph in MXNet and having
> > a new implementation that we can use to simplify a lot of code and do
> > powerful graph manipulation and passes such as operator fusion and
> > other optimizations.
> >
> > > As it would be a change with big impact and ramifications, your
> > > thoughts and feedback on the document would be highly appreciated,
> > > so we can take into account potential future use cases:
> >
> > https://cwiki.apache.org/confluence/display/MXNET/MXVM%3A+Operator+graph+2.0
> >
> > Pedro.
> >


Re: [Proposal] New operator graph for MXNet

2019-05-14 Thread Sheng Zha
Hi Pedro,

Thanks for taking the initiative. Skimming through the design doc, I didn't
see a comparison with existing solutions such as Relay in TVM, which is
already a dependency of MXNet. Could you elaborate on the comparison with
existing solutions in the design doc too?

-sz

On 2019/05/14 23:49:30, Pedro Larroy  wrote: 
> Hi dev@
> 
> As a result of my deep dives on the graph machinery I have created a
> new proposal to improve the operator graph in MXNet.
> 
> This would mean superseding the use of NNVM Graph in MXNet and having
> a new implementation that we can use to simplify a lot of code and do
> powerful graph manipulation and passes such as operator fusion and
> other optimizations.
> 
> > As it would be a change with big impact and ramifications, your
> > thoughts and feedback on the document would be highly appreciated,
> > so we can take into account potential future use cases:
> 
> https://cwiki.apache.org/confluence/display/MXNET/MXVM%3A+Operator+graph+2.0
> 
> Pedro.
> 


Re: assimilation of mshadow into the MXNet codebase

2019-05-14 Thread Pedro Larroy
Hi Sheng.

Do you need some help with this?  Do we plan to have this for 1.5?

Pedro.

On Wed, Apr 24, 2019 at 4:26 PM Pedro Larroy
 wrote:
>
> Thanks. Great to read.
>
> On Wed, Apr 24, 2019 at 2:19 PM Sheng Zha  wrote:
> >
> > The community has agreed to donate mshadow to the mxnet code base. I will 
> > start the migration and build logic changes soon.
> >
> > -sz
> >
> > On 2019/04/07 21:47:39, Sheng Zha  wrote:
> > > I agree it would make development easier to donate mshadow to the mxnet
> > > code base, since mshadow is only used in MXNet. I support donating the
> > > mshadow code to mxnet, and I started an RFC for this in mshadow [1].
> > >
> > > [1] https://github.com/dmlc/mshadow/issues/373
> > >
> > > -sz
> > >
> > > On 2019/04/06 04:38:19, Tianqi Chen  wrote:
> > > > Technically, mshadow is sufficient for MXNet. Adopting other libraries
> > > > (eigen or xtensor) will unnecessarily increase the codebase complexity
> > > > without any additional gains.
> > > >
> > > > Given that mshadow is only used by MXNet, I do support donating it
> > > > into the MXNet codebase.
> > > > To respect the original mshadow community, I would recommend starting
> > > > a community RFC in the mshadow GitHub issue for a week, before we
> > > > start the migration process.
> > > > Also, I would recommend a rebase merge, just like in the case of the
> > > > MXNet.jl code base, to preserve the contribution history.
> > > >
> > > > Tianqi
> > > >
> > > >
> > > > On Fri, Apr 5, 2019 at 9:25 PM Alfredo Luque
> > > >  wrote:
> > > >
> > > > > Do you have a link to both of these proposals?
> > > > >
> > > > > On Fri, Apr 5, 2019 at 20:14 Anirudh Acharya 
> > > > > wrote:
> > > > >
> > > > > > Hi Pedro,
> > > > > >
> > > > > > mshadow is mostly used for tensor arithmetic. There have been 
> > > > > > discussions
> > > > > > about including it within mxnet. I think it is a good idea.
> > > > > >
> > > > > > As a more long-term solution, using libraries like eigen to
> > > > > > perform linear algebra operations was also suggested by
> > > > > > anirudh2290@. I think xtensor
> > > > > > (https://github.com/QuantStack/xtensor) can also be a candidate
> > > > > > here.
> > > > > >
> > > > > > -
> > > > > > Anirudh
> > > > > >
> > > > > >
> > > > > > On Fri, Apr 5, 2019 at 7:03 PM Pedro Larroy <
> > > > > pedro.larroy.li...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi
> > > > > > >
> > > > > > > Some developers have noticed that working in mshadow is
> > > > > > > cumbersome, as it's a 3rdparty subrepo.
> > > > > > >
> > > > > > > Since mshadow is a bunch of headers without many independent
> > > > > > > tests or much library functionality, other developers and I
> > > > > > > believe that it would be good to assimilate this code into the
> > > > > > > repository, for ease of contribution and changes, without having
> > > > > > > to go through contortions to test PRs that modify mshadow.
> > > > > > >
> > > > > > > Would anybody oppose this change?
> > > > > > >
> > > > > > > Thanks and have a nice weekend.
> > > > > > >
> > > > > > > Pedro.
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >


Re: Python2 End of Life

2019-05-14 Thread Pedro Larroy
+1. Let Python2 rest; let's simplify our infrastructure and drop the
need to support old Python versions.

On Mon, May 13, 2019 at 1:58 PM Jake Lee  wrote:
>
> +1 Recently I upgraded the Numpy version and found out that Pylint had a
> false alarm on it. The Pylint fix is only available on Python 3, so I
> changed the default Python version of the 'make pylint' command to python3
> (the PR hasn't been merged yet). It's time to drop support for Python2.
>
> On Mon, May 13, 2019 at 1:37 PM Junru Shao  wrote:
>
> > +1
> >
> > On Mon, May 13, 2019 at 1:34 PM Aaron Markham 
> > wrote:
> >
> > > +1 for the pledge and to start moving things to Python 3.
> > > I think our installation instructions and tutorials can be updated to
> > > default to Python3 and we should update Python2-only tutorials. I know
> > > we have a handful of those, and when I spot them, I'll create an
> > > issue.
> > > I can also look at migrating the docs build to Python 3.
> > > Should we add a new label for issues relating to migrating to Python3?
> > > Cheers,
> > > Aaron
> > >
> > > On Mon, May 13, 2019 at 12:04 PM Zach Kimberg  > >
> > > wrote:
> > > >
> > > > Right now, the official date for ending support for Python 2.7 (and
> > > > all of python2) is set to January 1 [1]. As part of it, a number of
> > > > projects have pledged to drop support for Python2 in or before 2020
> > > > including Tensorflow, requests, pandas, ipython, numpy, pillow, and
> > > > Cython [2]. I believe we should also join in this pledge on
> > > > python3statement.org [2] because it would help clean up our project
> > > > and it would be difficult to continue supporting Python2 anyway when
> > > > some of our dependencies are dropping support.
> > > >
> > > > As a concrete step, we should decide on a date to remove all usages of
> > > > Python2 from our CI and consider that officially dropping support.
> > > > Following that, we can expect PRs will end up breaking support for
> > > > Python2.
> > > > I suggest just using the same date that Python is dropping support:
> > > > January 1. We may also need to update some examples or scripts around
> > > > the project that were written only for Python2. Any thoughts?
> > > >
> > > > Zach
> > > >
> > > >
> > > > [1] - https://www.python.org/dev/peps/pep-0373/
> > > > [2] - https://python3statement.org/
> > >
> >


Re: [INVITATION] 14th of May 2019 / Berlin MXNet Recurring User Group Meeting

2019-05-14 Thread Per da Silva
Hey Wen-Yang Chu,

Unfortunately, my talents are more on the CI/CD side at the moment, so I
don't know that I'll be able to answer your question.

Is there anyone out there that can join us and shine some light on the
situation?

If no one is able to join, I'll try to understand your question and find
someone who knows the answer.

Mad props to you for bringing MXNet to Philips and your startup!!

Cheers,

Per

On Tue., 14 May 2019, 3:21 pm Wen-Yang Chu,  wrote:

> Hi  Per da Silva,
>
> I would like to join this meeting. I would like to ask about a solution
> for how to properly replace the deprecated "crop" layer.
> I found that many have the same issue, and I have not found a proper
> solution. It can be a real deal breaker for me, even though I am really
> fond of MXNet and have used it at Philips for product development for
> more than 2 years. I am opening a startup and hope to continue using
> MXNet. I am from Belgium, by the way. See you today at 7 PM.
>
> Best regards,
>
> Wen-Yang
>
> On Tue, May 14, 2019 at 1:52 PM Per da Silva  wrote:
>
> > Dear MXNet community,
> >
> > I would like to invite you to the regular Apache MXNet (Incubating) User
> > Group meeting on the 14th of May 2019 [1].
> >
> > As usual, the meeting will have remote VC, powered by Amazon Chime [2].
> >
> > Due to availability, **TODAY** it will be held from *7pm-8pm (CEST) /
> > 10am-11am (PST)*, one hour later than usual.
> >
> > Join the meeting:
> >
> > https://chime.aws/2671929429
> >
> > Meeting ID: 2671929429
> >
> > Looking forward to meeting you there.
> >
> > Best
> > Per
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28Incubating%29+
> > User+Groups+recurring+meetings
> > [2] https://chime.aws/
> >
>


Re: [INVITATION] 14th of May 2019 / Berlin MXNet Recurring User Group Meeting

2019-05-14 Thread Wen-Yang Chu
Hi  Per da Silva,

I would like to join this meeting. I would like to ask about a solution
for how to properly replace the deprecated "crop" layer.
I found that many have the same issue, and I have not found a proper
solution. It can be a real deal breaker for me, even though I am really
fond of MXNet and have used it at Philips for product development for
more than 2 years. I am opening a startup and hope to continue using
MXNet. I am from Belgium, by the way. See you today at 7 PM.

Best regards,

Wen-Yang

On Tue, May 14, 2019 at 1:52 PM Per da Silva  wrote:

> Dear MXNet community,
>
> I would like to invite you to the regular Apache MXNet (Incubating) User
> Group meeting on the 14th of May 2019 [1].
>
> As usual, the meeting will have remote VC, powered by Amazon Chime [2].
>
> Due to availability, **TODAY** it will be held from *7pm-8pm (CEST) /
> 10am-11am (PST)*, one hour later than usual.
>
> Join the meeting:
>
> https://chime.aws/2671929429
>
> Meeting ID: 2671929429
>
> Looking forward to meeting you there.
>
> Best
> Per
>
> [1]
>
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28Incubating%29+
> User+Groups+recurring+meetings
> [2] https://chime.aws/
>


Re: [Proposal] MXNet operator benchmark library

2019-05-13 Thread Marco de Abreu
Great proposal!

sandeep krishnamurthy  schrieb am Di., 14. Mai
2019, 04:45:

> Hi Naveen,
>
> Thanks for your feedback and suggestions. I have updated the document
> addressing the feedback and concerns; pros and cons of alternate
> solutions have been added.
> https://cwiki.apache.org/confluence/display/MXNET/MXNet+Operator+Benchmarks
>
> Best,
> Sandeep
>
> On Mon, May 13, 2019 at 1:20 PM Naveen Swamy  wrote:
>
> > Sandeep,
> >
> > Thanks for initiating work on individual operator performance. However,
> > I find the proposed approach (i.e., a separate library/framework) to be
> > unnecessary, and it increases maintenance overhead for the project.
> > Also, have you considered alternate approaches to achieve the same goal?
> >
> > Many of the requirements/motivations you have mentioned should typically
> > be covered in unit tests (different data types, different dimensions), so
> > instead of having to rewrite performance measurement for all operators,
> > consider writing a @timeit routine (using Python decorators) which can be
> > called on individual unit tests. Also, even if you call the performance
> > script from Python, you typically want to measure as close to the kernel
> > as possible and avoid any other variables.
> >
> > I left some comments on the doc itself.
> >
> > Happy to discuss further.
> >
> > -Naveen
> >
> >
> > On Mon, Apr 29, 2019 at 1:57 PM sandeep krishnamurthy <
> > sandeep.krishn...@gmail.com> wrote:
> >
> > > Hello Community,
> > >
> > > I am currently working on building a utility/library to help us easily
> > > do individual operator benchmarking in MXNet. I have documented the
> > > proposal in this cwiki
> > > <https://cwiki.apache.org/confluence/display/MXNET/MXNet+Operator+Benchmarks>,
> > > and am staging the current development in this GitHub repository. The
> > > proposal is to get this library under incubator-mxnet/benchmark/. Please
> > > do review and provide your feedback and suggestions.
> > >
> > > Thanks to fellow MXNet community members Lin, Sam, and Rohit for
> > > providing initial ideas and suggestions.
> > >
> > > Best,
> > > Sandeep
> > >
> > >
> > >
> > >
> > > --
> > > Sandeep Krishnamurthy
> > >
> >
>
>
> --
> Sandeep Krishnamurthy
>


Re: [Proposal] MXNet operator benchmark library

2019-05-13 Thread sandeep krishnamurthy
Hi Naveen,

Thanks for your feedback and suggestions. I have updated the document
addressing the feedback and concerns; pros and cons of alternate
solutions have been added.
https://cwiki.apache.org/confluence/display/MXNET/MXNet+Operator+Benchmarks

Best,
Sandeep

On Mon, May 13, 2019 at 1:20 PM Naveen Swamy  wrote:

> Sandeep,
>
> Thanks for initiating work on individual operator performance. However, I
> find the proposed approach (i.e., a separate library/framework) to be
> unnecessary, and it increases maintenance overhead for the project.
> Also, have you considered alternate approaches to achieve the same goal?
>
> Many of the requirements/motivations you have mentioned should typically be
> covered in unit tests (different data types, different dimensions), so
> instead of having to rewrite performance measurement for all operators,
> consider writing a @timeit routine (using Python decorators) which can be
> called on individual unit tests. Also, even if you call the performance
> script from Python, you typically want to measure as close to the kernel as
> possible and avoid any other variables.
>
> I left some comments on the doc itself.
>
> Happy to discuss further.
>
> -Naveen
>
>
> On Mon, Apr 29, 2019 at 1:57 PM sandeep krishnamurthy <
> sandeep.krishn...@gmail.com> wrote:
>
> > Hello Community,
> >
> > I am currently working on building a utility/library to help us easily do
> > individual operator benchmarking in MXNet. I have documented the proposal
> > in this cwiki
> > <https://cwiki.apache.org/confluence/display/MXNET/MXNet+Operator+Benchmarks>,
> > and am staging the current development in this GitHub repository. The
> > proposal is to get this library under incubator-mxnet/benchmark/. Please
> > do review and provide your feedback and suggestions.
> >
> > Thanks to fellow MXNet community members Lin, Sam, and Rohit for providing
> > initial ideas and suggestions.
> >
> > Best,
> > Sandeep
> >
> >
> >
> >
> > --
> > Sandeep Krishnamurthy
> >
>


-- 
Sandeep Krishnamurthy


Re: [Announcement] New Committer - Zach Kimberg

2019-05-13 Thread Pedro Larroy
Congratulations

On Thu, May 9, 2019 at 11:29 AM Chaitanya Bapat  wrote:
>
> Congratulations Zachary! Way to go!
>
> On Thu, 9 May 2019 at 14:01, Carin Meier  wrote:
>
> > Congrats!
> >
> > On Thu, May 9, 2019 at 1:41 PM Per da Silva  wrote:
> >
> > > Nice one! Congratulations =)
> > >
> > > On Thu, May 9, 2019 at 7:38 PM Jake Lee  wrote:
> > >
> > > > Congrat!
> > > >
> > > > On Thu, May 9, 2019 at 10:37 AM Yuan Tang 
> > > wrote:
> > > >
> > > > > Welcome!
> > > > >
> > > > > On Thu, May 9, 2019 at 1:36 PM Marco de Abreu <
> > marco.g.ab...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Welcome!
> > > > > >
> > > > > > Hagay Lupesko  schrieb am Do., 9. Mai 2019,
> > > 19:33:
> > > > > >
> > > > > > > Congratulations Zach - well deserved!
> > > > > > >
> > > > > > > On Thu, May 9, 2019, 13:26 Qing Lan  wrote:
> > > > > > >
> > > > > > > > Hi All,
> > > > > > > >
> > > > > > > > Please join me in welcoming Zach Kimberg (
> > > > https://github.com/zachgk)
> > > > > > as
> > > > > > > a
> > > > > > > > new committer.
> > > > > > > >
> > > > > > > > He has been solving some important bugs in the MXNet JVM with
> > > > > > > > respect to usage improvement, build issues, and a lot more. He
> > > > > > > > also created the Jenkins-based publish pipeline so that we have
> > > > > > > > a standard way to build and test statically linked packages
> > > > > > > > conveniently for everyone in the community. Moreover, he solved
> > > > > > > > a bunch of license problems we have in MXNet and brought several
> > > > > > > > fixes to let us get the 1.4.0 release out on time.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Qing
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
> --
> *Chaitanya Prakash Bapat*
> *+1 (973) 953-6299*
>
> [image: https://www.linkedin.com//in/chaibapat25]
> [image: https://www.facebook.com/chaibapat]
> [image:
> https://twitter.com/ChaiBapchya] [image:
> https://www.linkedin.com//in/chaibapat25]
> 


Re: Python2 End of Life

2019-05-13 Thread Jake Lee
+1 Recently I upgraded the Numpy version and found out that Pylint had a
false alarm on it. The Pylint fix is only available on Python 3, so I
changed the default Python version of the 'make pylint' command to python3
(the PR hasn't been merged yet). It's time to drop support for Python2.

On Mon, May 13, 2019 at 1:37 PM Junru Shao  wrote:

> +1
>
> On Mon, May 13, 2019 at 1:34 PM Aaron Markham 
> wrote:
>
> > +1 for the pledge and to start moving things to Python 3.
> > I think our installation instructions and tutorials can be updated to
> > default to Python3 and we should update Python2-only tutorials. I know
> > we have a handful of those, and when I spot them, I'll create an
> > issue.
> > I can also look at migrating the docs build to Python 3.
> > Should we add a new label for issues relating to migrating to Python3?
> > Cheers,
> > Aaron
> >
> > On Mon, May 13, 2019 at 12:04 PM Zach Kimberg  >
> > wrote:
> > >
> > > Right now, the official date for ending support for Python 2.7 (and all
> > > of python2) is set to January 1 [1]. As part of it, a number of projects
> > > have pledged to drop support for Python2 in or before 2020 including
> > > Tensorflow, requests, pandas, ipython, numpy, pillow, and Cython [2]. I
> > > believe we should also join in this pledge on python3statement.org [2]
> > > because it would help clean up our project and it would be difficult to
> > > continue supporting Python2 anyway when some of our dependencies are
> > > dropping support.
> > >
> > > As a concrete step, we should decide on a date to remove all usages of
> > > Python2 from our CI and consider that officially dropping support.
> > > Following that, we can expect PRs will end up breaking support for
> > > Python2.
> > > I suggest just using the same date that Python is dropping support:
> > > January 1. We may also need to update some examples or scripts around
> > > the project that were written only for Python2. Any thoughts?
> > >
> > > Zach
> > >
> > >
> > > [1] - https://www.python.org/dev/peps/pep-0373/
> > > [2] - https://python3statement.org/
> >
>


Re: Python2 End of Life

2019-05-13 Thread Yuan Tang
+1

On Mon, May 13, 2019 at 4:37 PM Junru Shao  wrote:

> +1
>
> On Mon, May 13, 2019 at 1:34 PM Aaron Markham 
> wrote:
>
> > +1 for the pledge and to start moving things to Python 3.
> > I think our installation instructions and tutorials can be updated to
> > default to Python3 and we should update Python2-only tutorials. I know
> > we have a handful of those, and when I spot them, I'll create an
> > issue.
> > I can also look at migrating the docs build to Python 3.
> > Should we add a new label for issues relating to migrating to Python3?
> > Cheers,
> > Aaron
> >
> > On Mon, May 13, 2019 at 12:04 PM Zach Kimberg  >
> > wrote:
> > >
> > > Right now, the official date for ending support for Python 2.7 (and all
> > > of python2) is set to January 1 [1]. As part of it, a number of projects
> > > have pledged to drop support for Python2 in or before 2020 including
> > > Tensorflow, requests, pandas, ipython, numpy, pillow, and Cython [2]. I
> > > believe we should also join in this pledge on python3statement.org [2]
> > > because it would help clean up our project and it would be difficult to
> > > continue supporting Python2 anyway when some of our dependencies are
> > > dropping support.
> > >
> > > As a concrete step, we should decide on a date to remove all usages of
> > > Python2 from our CI and consider that officially dropping support.
> > > Following that, we can expect PRs will end up breaking support for
> > > Python2.
> > > I suggest just using the same date that Python is dropping support:
> > > January 1. We may also need to update some examples or scripts around
> > > the project that were written only for Python2. Any thoughts?
> > >
> > > Zach
> > >
> > >
> > > [1] - https://www.python.org/dev/peps/pep-0373/
> > > [2] - https://python3statement.org/
> >
>


Re: Python2 End of Life

2019-05-13 Thread Junru Shao
+1

On Mon, May 13, 2019 at 1:34 PM Aaron Markham 
wrote:

> +1 for the pledge and to start moving things to Python 3.
> I think our installation instructions and tutorials can be updated to
> default to Python3 and we should update Python2-only tutorials. I know
> we have a handful of those, and when I spot them, I'll create an
> issue.
> I can also look at migrating the docs build to Python 3.
> Should we add a new label for issues relating to migrating to Python3?
> Cheers,
> Aaron
>
> On Mon, May 13, 2019 at 12:04 PM Zach Kimberg 
> wrote:
> >
> > Right now, the official date for ending support for Python 2.7 (and all
> > of python2) is set to January 1 [1]. As part of it, a number of projects
> > have pledged to drop support for Python2 in or before 2020 including
> > Tensorflow, requests, pandas, ipython, numpy, pillow, and Cython [2]. I
> > believe we should also join in this pledge on python3statement.org [2]
> > because it would help clean up our project and it would be difficult to
> > continue supporting Python2 anyway when some of our dependencies are
> > dropping support.
> >
> > As a concrete step, we should decide on a date to remove all usages of
> > Python2 from our CI and consider that officially dropping support.
> > Following that, we can expect PRs will end up breaking support for
> > Python2.
> > I suggest just using the same date that Python is dropping support:
> > January 1. We may also need to update some examples or scripts around
> > the project that were written only for Python2. Any thoughts?
> >
> > Zach
> >
> >
> > [1] - https://www.python.org/dev/peps/pep-0373/
> > [2] - https://python3statement.org/
>


Re: Python2 End of Life

2019-05-13 Thread Aaron Markham
+1 for the pledge and to start moving things to Python 3.
I think our installation instructions and tutorials can be updated to
default to Python3 and we should update Python2-only tutorials. I know
we have a handful of those, and when I spot them, I'll create an
issue.
I can also look at migrating the docs build to Python 3.
Should we add a new label for issues relating to migrating to Python3?
Cheers,
Aaron

On Mon, May 13, 2019 at 12:04 PM Zach Kimberg  wrote:
>
> Right now, the official date for ending support for Python 2.7 (and all of
> python2) is set to January 1 [1]. As part of it, a number of projects have
> pledged to drop support for Python2 in or before 2020 including Tensorflow,
> requests, pandas, ipython, numpy, pillow, and Cython [2]. I believe we
> should also join in this pledge on python3statement.org [2] because it
> would help clean up our project and it would be difficult to continue
> supporting Python2 anyway when some of our dependencies are dropping
> support.
>
> As a concrete step, we should decide on a date to remove all usages of
> Python2 from our CI and consider that officially dropping support.
> Following that, we can expect PRs will end up breaking support for Python2.
> I suggest just using the same date that Python is dropping support:
> January 1. We may also need to update some examples or scripts around
> the project that were written only for Python2. Any thoughts?
>
> Zach
>
>
> [1] - https://www.python.org/dev/peps/pep-0373/
> [2] - https://python3statement.org/


Re: [Proposal] MXNet operator benchmark library

2019-05-13 Thread Naveen Swamy
Sandeep,

Thanks for initiating work on individual operator performance. However, I
find the proposed approach (i.e., a separate library/framework) to be
unnecessary, and it increases maintenance overhead for the project.
Also, have you considered alternate approaches to achieve the same goal?

Many of the requirements/motivations you have mentioned should typically be
covered in unit tests (different data types, different dimensions), so
instead of having to rewrite performance measurement for all operators,
consider writing a @timeit routine (using Python decorators) which can be
called on individual unit tests. Also, even if you call the performance
script from Python, you typically want to measure as close to the kernel as
possible and avoid any other variables.
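
To make the "close to the kernel" point concrete, here is a minimal,
self-contained C++ sketch that warms up first and then times only the
kernel invocation, keeping script and framework overhead out of the
measurement. FakeKernel is a hypothetical stand-in, not an MXNet operator.

// Illustrative only: time a bare kernel loop with std::chrono.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

void FakeKernel(std::vector<float>& a, const std::vector<float>& b) {
  for (std::size_t i = 0; i < a.size(); ++i) a[i] += b[i];  // stand-in op
}

int main() {
  std::vector<float> a(1 << 20, 1.0f), b(1 << 20, 2.0f);
  for (int i = 0; i < 10; ++i) FakeKernel(a, b);    // warm-up: caches, pages

  const int runs = 100;
  const auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < runs; ++i) FakeKernel(a, b);  // timed region: kernel only
  const auto t1 = std::chrono::steady_clock::now();

  const double us =
      std::chrono::duration<double, std::micro>(t1 - t0).count() / runs;
  std::printf("avg kernel time: %.2f us\n", us);
  return 0;
}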

I left some comments on the doc itself.

Happy to discuss further.

-Naveen


On Mon, Apr 29, 2019 at 1:57 PM sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> Hello Community,
>
> I am currently working on building a utility/library to help us easily do
> individual operator benchmarking in MXNet. I have documented the proposal
> in this cwiki
> <https://cwiki.apache.org/confluence/display/MXNET/MXNet+Operator+Benchmarks>,
> and am staging the current development in this GitHub repository. The
> proposal is to get this library under incubator-mxnet/benchmark/. Please
> do review and provide your feedback and suggestions.
>
> Thanks to fellow MXNet community members Lin, Sam, and Rohit for providing
> initial ideas and suggestions.
>
> Best,
> Sandeep
>
>
>
>
> --
> Sandeep Krishnamurthy
>


Re: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-13 Thread Sheng Zha
Thanks to the help from mentors, our vote on general@incubator is set to pass.

I'm sharing the issues mentioned in the vote that we need to fix before the
next release:
- Standard way to run rat; see [1]
- cpp-package/example/get_data.sh and similar scripts should use a canonical
URL for the MNIST data and mention the license (P.S. [2] mentions CC BY-SA
3.0, but the original link [3] didn't mention a license; we may need to
clarify this first)

And thanks to all who contributed to this release, and thanks to Junru, who
drove the release work.

-sz

[1] https://github.com/apache/incubator-mxnet/issues/14936
[2] http://www.pymvpa.org/datadb/mnist.html
[3] http://yann.lecun.com/exdb/mnist/

On 2019/05/09 16:55:06, Hen  wrote: 
> Noting that I am a belated +1 on the release.
> 
> I had one item regarding dataset licensing that I’d like to see improved
> for the next release, but I don’t believe it would have been a blocker.
> 
> Hen
> 
> On Sat, May 4, 2019 at 00:00 Junru Shao  wrote:
> 
> > Dear MXNet community,
> >
> > I'm happy to announce the results of the vote.
> >
> > This vote passes with 12 +1 votes (3 binding), no 0 votes, and 1 -1 vote.
> > +1 votes
> > * Sheng Zha / binding
> > * Qing Lan / binding
> > * Carin Meier / binding
> > * Aaron Markham
> > * Pedro Larroy
> > * Lai Wei
> > * Damien Stanton
> > * Kellen Sunderland
> > * Yuxi Hu
> > * Joshua Z. Zhang
> > * Philip Hyunsu Cho
> > * Aston Zhang
> >
> > 0 votes
> > * No votes
> >
> > -1 votes
> > * Anirudh Subramanian
> >
> > Vote thread can be found here [1]. The list of members can be found here
> > [2].
> >
> > I'll continue with the release process and the release announcement will
> > follow in the next few days.
> >
> > Best regards,
> > Junru Shao
> >
> > [1]
> >
> > https://lists.apache.org/thread.html/6c140f4c180c259dd1b7f4ecf36f2d083ed810cd68b37d7f635f5614@%3Cdev.mxnet.apache.org%3E
> > [2] http://incubator.apache.org/projects/mxnet.html
> >
> 


Re: Unable to comment on GitHub issue

2019-05-10 Thread Naveen Swamy
Everything is in place; I don't have any problems on other issues and PRs,
except a few of them.

Certainly, Apache Infra should override the setting on Apache projects to
avoid abuse of the feature.

> On Fri, May 10, 2019 at 3:42 AM Marco de Abreu  
> wrote:
> Do you have 2 factor authentication enabled? Apache requires committers to
> have it enabled or they automatically revoke all permissions.
> 
> Also, check that https://gitbox.apache.org/setup/ is all green and MXNet is
> listed as repository in the bottom half.
> 
> -Marco
> 
> Sheng Zha  schrieb am Fr., 10. Mai 2019, 02:47:
> 
> > Locking a conversation wouldn't limit a committer from commenting. "While
> > a conversation is locked, only people with write access and repository
> > owners and collaborators can add comments." [1]
> >
> > Unless the apache organization has the blocking setting, blocking by a
> person shouldn't limit one from commenting on issues in the mxnet repo either.
> > The organization that owns the repo needs to explicitly block the person to
> > be able to prevent one from commenting on an issue in the repo of that
> > organization. [2]
> >
> > -sz
> >
> > [1] https://help.github.com/en/articles/locking-conversations
> > [2]
> > https://help.github.com/en/articles/blocking-a-user-from-your-personal-account
> >
> > On 2019/05/09 23:33:00, Aaron Markham  wrote:
> > > I just locked one of the issues I created:
> > > https://github.com/apache/incubator-mxnet/issues/14918
> > > Are you sure you don't have the unlock button on the right side?
> > > You should see this:
> > >
> > > aaronmarkham locked as off topic and limited conversation to
> > > collaborators 24 seconds from now
> > >
> > > Then to the right of that:
> > >
> > >  Unlock conversation
> > >  Pin issue
> > >
> > > On Thu, May 9, 2019 at 4:27 PM Naveen Swamy  wrote:
> > > >
> > > > I don't see the option; another possible explanation is that someone
> > > > blocked me. If that is the case, it goes against the ethos of open
> > > > source. Apache Infra should override that setting for Apache projects.
> > > > Anyway, I created this Jira.
> > > >
> > https://issues.apache.org/jira/plugins/servlet/mobile#issue/INFRA-18356
> > > >
> > > > -Naveen
> > > >
> > > > > On May 9, 2019, at 4:19 PM, Aaron Markham 
> > wrote:
> > > > >
> > > > > A new feature:
> > https://help.github.com/en/articles/locking-conversations
> > > > > So someone must have locked it. I can see the option on the right
> > hand
> > > > > side column, all the way at the bottom. You will probably have the
> > > > > ability to unlock it from there too.
> > > > >
> > > > >> On Thu, May 9, 2019 at 3:42 PM Chaitanya Bapat <
> > chai.ba...@gmail.com> wrote:
> > > > >>
> > > > >> Could you give links to any specific issues? Then I could verify
> > > > >> whether that's the case for me too.
> > > > >>
> > > > >>> On Thu, 9 May 2019 at 14:44, Naveen Swamy 
> > wrote:
> > > > >>>
> > > > >>> I am unable to comment on certain GitHub issues and see a locked
> > > > >>> icon, wondering if anyone has experienced this and knows why?
> > > > >>>
> > > > >>
> > > > >>
> > > > >> --
> > > > >> *Chaitanya Prakash Bapat*
> > > > >> *+1 (973) 953-6299*
> > > > >>
> > > > >> [image: https://www.linkedin.com//in/chaibapat25]
> > > > >> [image:
> > https://www.facebook.com/chaibapat]
> > > > >> [image:
> > > > >> https://twitter.com/ChaiBapchya]  > >[image:
> > > > >> https://www.linkedin.com//in/chaibapat25]
> > > > >> 
> > >
> >


Re: Unable to comment on GitHub issue

2019-05-10 Thread Marco de Abreu
Do you have 2 factor authentication enabled? Apache requires committers to
have it enabled or they automatically revoke all permissions.

Also, check that https://gitbox.apache.org/setup/ is all green and MXNet is
listed as repository in the bottom half.

-Marco

Sheng Zha  schrieb am Fr., 10. Mai 2019, 02:47:

> Locking a conversation wouldn't limit a committer from commenting. "While
> a conversation is locked, only people with write access and repository
> owners and collaborators can add comments." [1]
>
> Unless the apache organization has the blocking setting, blocking by a
> person shouldn't limit one from commenting on issues in the mxnet repo either.
> The organization that owns the repo needs to explicitly block the person to
> be able to prevent one from commenting on an issue in the repo of that
> organization. [2]
>
> -sz
>
> [1] https://help.github.com/en/articles/locking-conversations
> [2]
> https://help.github.com/en/articles/blocking-a-user-from-your-personal-account
>
> On 2019/05/09 23:33:00, Aaron Markham  wrote:
> > I just locked one of the issues I created:
> > https://github.com/apache/incubator-mxnet/issues/14918
> > Are you sure you don't have the unlock button on the right side?
> > You should see this:
> >
> > aaronmarkham locked as off topic and limited conversation to
> > collaborators 24 seconds from now
> >
> > Then to the right of that:
> >
> >  Unlock conversation
> >  Pin issue
> >
> > On Thu, May 9, 2019 at 4:27 PM Naveen Swamy  wrote:
> > >
> > > I don't see the option; another possible explanation is that someone
> > > blocked me. If that is the case, it goes against the ethos of open
> > > source. Apache Infra should override that setting for Apache projects.
> > > Anyway, I created this Jira.
> > >
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/INFRA-18356
> > >
> > > -Naveen
> > >
> > > > On May 9, 2019, at 4:19 PM, Aaron Markham 
> wrote:
> > > >
> > > > A new feature:
> https://help.github.com/en/articles/locking-conversations
> > > > So someone must have locked it. I can see the option on the right
> hand
> > > > side column, all the way at the bottom. You will probably have the
> > > > ability to unlock it from there too.
> > > >
> > > >> On Thu, May 9, 2019 at 3:42 PM Chaitanya Bapat <
> chai.ba...@gmail.com> wrote:
> > > >>
> > > >> Could you give links to any specific issues? Then I could verify
> > > >> whether that's the case for me too.
> > > >>
> > > >>> On Thu, 9 May 2019 at 14:44, Naveen Swamy 
> wrote:
> > > >>>
> > > >>> I am unable to comment on certain GitHub issues and see a locked
> > > >>> icon, wondering if anyone has experienced this and knows why?
> > > >>>
> > > >>
> > > >>
> > > >> --
> > > >> *Chaitanya Prakash Bapat*
> > > >> *+1 (973) 953-6299*
> > > >>
> > > >> [image: https://www.linkedin.com//in/chaibapat25]
> > > >> [image:
> https://www.facebook.com/chaibapat]
> > > >> [image:
> > > >> https://twitter.com/ChaiBapchya]  >[image:
> > > >> https://www.linkedin.com//in/chaibapat25]
> > > >> 
> >
>


Re: Unable to comment on GitHub issue

2019-05-09 Thread Sheng Zha
Locking a conversation wouldn't limit a committer from commenting. "While a 
conversation is locked, only people with write access and repository owners and 
collaborators can add comments." [1]

Unless the apache organization has the blocking setting, blocking by a person 
shouldn't limit one from commenting on issues in the mxnet repo either. The
organization that owns the repo needs to explicitly block the person to be able 
to prevent one from commenting on an issue in the repo of that organization. [2]

-sz

[1] https://help.github.com/en/articles/locking-conversations
[2] 
https://help.github.com/en/articles/blocking-a-user-from-your-personal-account

On 2019/05/09 23:33:00, Aaron Markham  wrote: 
> I just locked one of the issues I created:
> https://github.com/apache/incubator-mxnet/issues/14918
> Are you sure you don't have the unlock button on the right side?
> You should see this:
> 
> aaronmarkham locked as off topic and limited conversation to
> collaborators 24 seconds from now
> 
> Then to the right of that:
> 
>  Unlock conversation
>  Pin issue
> 
> On Thu, May 9, 2019 at 4:27 PM Naveen Swamy  wrote:
> >
> > I don't see the option; another possible explanation is that someone
> > blocked me. If that is the case, it goes against the ethos of open
> > source. Apache Infra should override that setting for Apache projects.
> > Anyway, I created this Jira.
> > https://issues.apache.org/jira/plugins/servlet/mobile#issue/INFRA-18356
> >
> > -Naveen
> >
> > > On May 9, 2019, at 4:19 PM, Aaron Markham  
> > > wrote:
> > >
> > > A new feature: https://help.github.com/en/articles/locking-conversations
> > > So someone must have locked it. I can see the option on the right hand
> > > side column, all the way at the bottom. You will probably have the
> > > ability to unlock it from there too.
> > >
> > >> On Thu, May 9, 2019 at 3:42 PM Chaitanya Bapat  
> > >> wrote:
> > >>
> > >> Could you give links to any specific issues? Then I could verify
> > >> whether that's the case for me too.
> > >>
> > >>> On Thu, 9 May 2019 at 14:44, Naveen Swamy  wrote:
> > >>>
> > >>> I am unable to comment on certain GitHub issues and see a locked
> > >>> icon, wondering if anyone has experienced this and knows why?
> > >>>
> > >>
> > >>
> > >> --
> > >> *Chaitanya Prakash Bapat*
> > >> *+1 (973) 953-6299*
> > >>
> > >> [image: https://www.linkedin.com//in/chaibapat25]
> > >> [image: 
> > >> https://www.facebook.com/chaibapat]
> > >> [image:
> > >> https://twitter.com/ChaiBapchya] [image:
> > >> https://www.linkedin.com//in/chaibapat25]
> > >> 
> 


Re: [DISCUSS] AWS Credits for External Contributors

2019-05-09 Thread Chaitanya Bapat
Sure, I'll use the AWS Educate route. (Google Colab or AWS SageMaker would
be great for an MXNet user, but I wanted to build and test. Moreover, for
memory profiling, I need access to an instance with a GPU more than
anything else.) But anyway, I'll use AWS Educate.

Thanks for the quick response.

On Thu, 9 May 2019 at 19:08, Aaron Markham 
wrote:

> One option is AWS Educate. https://aws.amazon.com/education/awseducate/
> Last I checked, you can get $75/month AWS credit as a student or
> educator. If you belong to an educational organization, your org can
> apply on your behalf and get anyone with that org's domain easier
> access to the credits. Or something like that.
>
> Another route is you might be able to load your test/work into a
> notebook and run it on Google Colab. Vandana has this neat DCGAN with
> MXNet notebook running there.
>
> https://colab.research.google.com/github/vandanavk/mxnet-gluon-gan/blob/dcgan/dcgan/dcgan.ipynb
>
> Will either of those work for you?
>
> Cheers,
> Aaron
>
> On Thu, May 9, 2019 at 11:30 AM Chaitanya Bapat 
> wrote:
> >
> > Hello MXNet community,
> >
> > I was curious to know if there is any possibility of AWS Credits
> > provisioned for external contributors of Apache MXNet. It would be a
> great
> > incentive for more external contributions and in turn more external
> > contributors.
> >
> > Background -
> > Today, while trying to work on Anirudh's Memory profiling for MXNet PR, I
> > realized I am short of AWS credits on my personal account. My personal
> > computer (Mac 2017) doesn't have an Nvidia GPU, and hence I'm a bit stuck.
> >
> > I don't know if there are others who have faced a similar situation. If
> > that's the case, maybe we can find a solution through free AWS Credits.
> >
> > Thanks,
> > Chai
> >
> > --
> > *Chaitanya Prakash Bapat*
> > *+1 (973) 953-6299*
> >
> > [image: https://www.linkedin.com//in/chaibapat25]
> > [image:
> https://www.facebook.com/chaibapat]
> > [image:
> > https://twitter.com/ChaiBapchya]  >[image:
> > https://www.linkedin.com//in/chaibapat25]
> > 
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*

[image: https://www.linkedin.com//in/chaibapat25]
[image: https://www.facebook.com/chaibapat]
[image:
https://twitter.com/ChaiBapchya] [image:
https://www.linkedin.com//in/chaibapat25]



Re: Unable to comment on GitHub issue

2019-05-09 Thread Aaron Markham
I just locked one of the issues I created:
https://github.com/apache/incubator-mxnet/issues/14918
Are you sure you don't have the unlock button on the right side?
You should see this:

aaronmarkham locked as off topic and limited conversation to
collaborators 24 seconds from now

Then to the right of that:

 Unlock conversation
 Pin issue

On Thu, May 9, 2019 at 4:27 PM Naveen Swamy  wrote:
>
> I don't see the option; another possible explanation is that someone
> blocked me. If that is the case, it goes against the ethos of open
> source. Apache Infra should override that setting for Apache projects.
> Anyway, I created this Jira.
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/INFRA-18356
>
> -Naveen
>
> > On May 9, 2019, at 4:19 PM, Aaron Markham  wrote:
> >
> > A new feature: https://help.github.com/en/articles/locking-conversations
> > So someone must have locked it. I can see the option on the right hand
> > side column, all the way at the bottom. You will probably have the
> > ability to unlock it from there too.
> >
> >> On Thu, May 9, 2019 at 3:42 PM Chaitanya Bapat  
> >> wrote:
> >>
> >> Could you give links to any specific issues? Then I could verify
> >> whether that's the case for me too.
> >>
> >>> On Thu, 9 May 2019 at 14:44, Naveen Swamy  wrote:
> >>>
> >>> I am unable to comment on certain GitHub issues and see a locked
> >>> icon, wondering if anyone has experienced this and knows why?
> >>>
> >>
> >>
> >> --
> >> *Chaitanya Prakash Bapat*
> >> *+1 (973) 953-6299*
> >>
> >> [image: https://www.linkedin.com//in/chaibapat25]
> >> [image: https://www.facebook.com/chaibapat]
> >> [image:
> >> https://twitter.com/ChaiBapchya] [image:
> >> https://www.linkedin.com//in/chaibapat25]
> >> 


Re: Unable to comment on GitHub issue

2019-05-09 Thread Naveen Swamy
I don't see the option; another possible explanation is that someone blocked
me. If that is the case, it goes against the ethos of open source.
Apache Infra should override that setting for Apache projects. Anyway, I
created this Jira.
https://issues.apache.org/jira/plugins/servlet/mobile#issue/INFRA-18356

-Naveen

> On May 9, 2019, at 4:19 PM, Aaron Markham  wrote:
> 
> A new feature: https://help.github.com/en/articles/locking-conversations
> So someone must have locked it. I can see the option on the right hand
> side column, all the way at the bottom. You will probably have the
> ability to unlock it from there too.
> 
>> On Thu, May 9, 2019 at 3:42 PM Chaitanya Bapat  wrote:
>> 
>> Could you give links to any specific issues? Then I could verify
>> whether that's the case for me too.
>> 
>>> On Thu, 9 May 2019 at 14:44, Naveen Swamy  wrote:
>>> 
>>> I am unable to comment on certain GitHub issues and see a locked
>>> icon, wondering if anyone has experienced this and knows why?
>>> 
>> 
>> 
>> --
>> *Chaitanya Prakash Bapat*
>> *+1 (973) 953-6299*
>> 
>> [image: https://www.linkedin.com//in/chaibapat25]
>> [image: https://www.facebook.com/chaibapat]
>> [image:
>> https://twitter.com/ChaiBapchya] [image:
>> https://www.linkedin.com//in/chaibapat25]
>> 


Re: Unable to comment on GitHub issue

2019-05-09 Thread Aaron Markham
A new feature: https://help.github.com/en/articles/locking-conversations
So someone must have locked it. I can see the option on the right hand
side column, all the way at the bottom. You will probably have the
ability to unlock it from there too.

On Thu, May 9, 2019 at 3:42 PM Chaitanya Bapat  wrote:
>
> Could you give links to any specific issues? Then I could verify
> whether that's the case for me too.
>
> On Thu, 9 May 2019 at 14:44, Naveen Swamy  wrote:
>
> > I am unable to comment on certain GitHub issues and see a locked
> > icon, wondering if anyone has experienced this and knows why?
> >
>
>
> --
> *Chaitanya Prakash Bapat*
> *+1 (973) 953-6299*
>
> [image: https://www.linkedin.com//in/chaibapat25]
> [image: https://www.facebook.com/chaibapat]
> [image:
> https://twitter.com/ChaiBapchya] [image:
> https://www.linkedin.com//in/chaibapat25]
> 


Re: [DISCUSS] AWS Credits for External Contributors

2019-05-09 Thread Aaron Markham
One option is AWS Educate. https://aws.amazon.com/education/awseducate/
Last I checked, you can get $75/month AWS credit as a student or
educator. If you belong to an educational organization, your org can
apply on your behalf and get anyone with that org's domain easier
access to the credits. Or something like that.

Another route is you might be able to load your test/work into a
notebook and run it on Google Colab. Vandana has this neat DCGAN with
MXNet notebook running there.
https://colab.research.google.com/github/vandanavk/mxnet-gluon-gan/blob/dcgan/dcgan/dcgan.ipynb

Will either of those work for you?

Cheers,
Aaron

On Thu, May 9, 2019 at 11:30 AM Chaitanya Bapat  wrote:
>
> Hello MXNet community,
>
> I was curious to know if there is any possibility of AWS Credits
> provisioned for external contributors of Apache MXNet. It would be a great
> incentive for more external contributions and in turn more external
> contributors.
>
> Background -
> Today, while trying to work on Anirudh's Memory profiling for MXNet PR, I
> realized I am short of AWS credits on my personal account. My personal
> computer (Mac 2017) doesn't have an Nvidia GPU, and hence I'm a bit stuck.
>
> I don't know if there are others who have faced a similar situation. If
> that's the case, maybe we can find a solution through free AWS Credits.
>
> Thanks,
> Chai
>
> --
> *Chaitanya Prakash Bapat*
> *+1 (973) 953-6299*
>
> [image: https://www.linkedin.com//in/chaibapat25]
> [image: https://www.facebook.com/chaibapat]
> [image:
> https://twitter.com/ChaiBapchya] [image:
> https://www.linkedin.com//in/chaibapat25]
> 


Re: Unable to comment on GitHub issue

2019-05-09 Thread Chaitanya Bapat
Could you give links to any specific issues? Then I could verify whether
that's the case for me too.

On Thu, 9 May 2019 at 14:44, Naveen Swamy  wrote:

> I am unable to comment on certain GitHub issues and see a locked
> icon, wondering if anyone has experienced this and knows why?
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*

[image: https://www.linkedin.com//in/chaibapat25]
[image: https://www.facebook.com/chaibapat]
[image:
https://twitter.com/ChaiBapchya] [image:
https://www.linkedin.com//in/chaibapat25]



Re: [Announcement] New Committer - Zach Kimberg

2019-05-09 Thread Chaitanya Bapat
Congratulations Zachary! Way to go!

On Thu, 9 May 2019 at 14:01, Carin Meier  wrote:

> Congrats!
>
> On Thu, May 9, 2019 at 1:41 PM Per da Silva  wrote:
>
> > Nice one! Congratulations =)
> >
> > On Thu, May 9, 2019 at 7:38 PM Jake Lee  wrote:
> >
> > > Congrat!
> > >
> > > On Thu, May 9, 2019 at 10:37 AM Yuan Tang 
> > wrote:
> > >
> > > > Welcome!
> > > >
> > > > On Thu, May 9, 2019 at 1:36 PM Marco de Abreu <
> marco.g.ab...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Welcome!
> > > > >
> > > > > Hagay Lupesko  schrieb am Do., 9. Mai 2019,
> > 19:33:
> > > > >
> > > > > > Congratulations Zach - well deserved!
> > > > > >
> > > > > > On Thu, May 9, 2019, 13:26 Qing Lan  wrote:
> > > > > >
> > > > > > > Hi All,
> > > > > > >
> > > > > > > Please join me in welcoming Zach Kimberg (
> > > https://github.com/zachgk)
> > > > > as
> > > > > > a
> > > > > > > new committer.
> > > > > > >
> > > > > > > He has been solving some important bugs in the MXNet JVM with
> > > > > > > respect to usage improvement, build issues, and a lot more. He
> > > > > > > also created the Jenkins-based publish pipeline so that we have a
> > > > > > > standard way to build and test statically linked packages
> > > > > > > conveniently for everyone in the community. Moreover, he solved a
> > > > > > > bunch of license problems we have in MXNet and brought several
> > > > > > > fixes to let us get the 1.4.0 release out on time.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Qing
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*

[image: https://www.linkedin.com//in/chaibapat25]
[image: https://www.facebook.com/chaibapat]
[image:
https://twitter.com/ChaiBapchya] [image:
https://www.linkedin.com//in/chaibapat25]



Re: [Announcement] New Committer - Zach Kimberg

2019-05-09 Thread Carin Meier
Congrats!

On Thu, May 9, 2019 at 1:41 PM Per da Silva  wrote:

> Nice one! Congratulations =)
>
> On Thu, May 9, 2019 at 7:38 PM Jake Lee  wrote:
>
> > Congrat!
> >
> > On Thu, May 9, 2019 at 10:37 AM Yuan Tang 
> wrote:
> >
> > > Welcome!
> > >
> > > On Thu, May 9, 2019 at 1:36 PM Marco de Abreu  >
> > > wrote:
> > >
> > > > Welcome!
> > > >
> > > > Hagay Lupesko  schrieb am Do., 9. Mai 2019,
> 19:33:
> > > >
> > > > > Congratulations Zach - well deserved!
> > > > >
> > > > > On Thu, May 9, 2019, 13:26 Qing Lan  wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > Please join me in welcoming Zach Kimberg (
> > https://github.com/zachgk)
> > > > as
> > > > > a
> > > > > > new committer.
> > > > > >
> > > > > > He has been solving some important bugs in the MXNet JVM with
> > > > > > respect to usage improvement, build issues, and a lot more. He also
> > > > > > created the Jenkins-based publish pipeline so that we have a
> > > > > > standard way to build and test statically linked packages
> > > > > > conveniently for everyone in the community. Moreover, he solved a
> > > > > > bunch of license problems we have in MXNet and brought several
> > > > > > fixes to let us get the 1.4.0 release out on time.
> > > > > >
> > > > > > Thanks,
> > > > > > Qing
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [Announcement] New Committer - Zach Kimberg

2019-05-09 Thread Yuan Tang
Welcome!

On Thu, May 9, 2019 at 1:36 PM Marco de Abreu 
wrote:

> Welcome!
>
> Hagay Lupesko  schrieb am Do., 9. Mai 2019, 19:33:
>
> > Congratulations Zach - well deserved!
> >
> > On Thu, May 9, 2019, 13:26 Qing Lan  wrote:
> >
> > > Hi All,
> > >
> > > Please join me in welcoming Zach Kimberg (https://github.com/zachgk)
> as
> > a
> > > new committer.
> > >
> > > He has been solving some important bugs in the MXNet JVM with respect
> > > to usage improvement, build issues, and a lot more. He also created the
> > > Jenkins-based publish pipeline so that we have a standard way to build
> > > and test statically linked packages conveniently for everyone in the
> > > community. Moreover, he solved a bunch of license problems we have in
> > > MXNet and brought several fixes to let us get the 1.4.0 release out on
> > > time.
> > >
> > > Thanks,
> > > Qing
> > >
> >
>


Re: [Announcement] New Committer - Zach Kimberg

2019-05-09 Thread Jake Lee
Congrat!

On Thu, May 9, 2019 at 10:37 AM Yuan Tang  wrote:

> Welcome!
>
> On Thu, May 9, 2019 at 1:36 PM Marco de Abreu 
> wrote:
>
> > Welcome!
> >
> > Hagay Lupesko  schrieb am Do., 9. Mai 2019, 19:33:
> >
> > > Congratulations Zach - well deserved!
> > >
> > > On Thu, May 9, 2019, 13:26 Qing Lan  wrote:
> > >
> > > > Hi All,
> > > >
> > > > Please join me in welcoming Zach Kimberg (https://github.com/zachgk)
> > as
> > > a
> > > > new committer.
> > > >
> > > > He has been solving some important bugs in the MXNet JVM with respect
> > > > to usage improvement, build issues, and a lot more. He also created
> > > > the Jenkins-based publish pipeline so that we have a standard way to
> > > > build and test statically linked packages conveniently for everyone in
> > > > the community. Moreover, he solved a bunch of license problems we have
> > > > in MXNet and brought several fixes to let us get the 1.4.0 release out
> > > > on time.
> > > >
> > > > Thanks,
> > > > Qing
> > > >
> > >
> >
>


Re: [Announcement] New Committer - Zach Kimberg

2019-05-09 Thread Per da Silva
Nice one! Congratulations =)

On Thu, May 9, 2019 at 7:38 PM Jake Lee  wrote:

> Congrat!
>
> On Thu, May 9, 2019 at 10:37 AM Yuan Tang  wrote:
>
> > Welcome!
> >
> > On Thu, May 9, 2019 at 1:36 PM Marco de Abreu 
> > wrote:
> >
> > > Welcome!
> > >
> > > Hagay Lupesko  schrieb am Do., 9. Mai 2019, 19:33:
> > >
> > > > Congratulations Zach - well deserved!
> > > >
> > > > On Thu, May 9, 2019, 13:26 Qing Lan  wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > Please join me in welcoming Zach Kimberg (
> https://github.com/zachgk)
> > > as
> > > > a
> > > > > new committer.
> > > > >
> > > > > He has been solving some important bugs in the MXNet JVM with
> > > > > respect to usage improvement, build issues, and a lot more. He also
> > > > > created the Jenkins-based publish pipeline so that we have a standard
> > > > > way to build and test statically linked packages conveniently for
> > > > > everyone in the community. Moreover, he solved a bunch of license
> > > > > problems we have in MXNet and brought several fixes to let us get the
> > > > > 1.4.0 release out on time.
> > > > >
> > > > > Thanks,
> > > > > Qing
> > > > >
> > > >
> > >
> >
>


Re: [Announcement] New Committer - Zach Kimberg

2019-05-09 Thread Marco de Abreu
Welcome!

Hagay Lupesko  schrieb am Do., 9. Mai 2019, 19:33:

> Congratulations Zach - well deserved!
>
> On Thu, May 9, 2019, 13:26 Qing Lan  wrote:
>
> > Hi All,
> >
> > Please join me in welcoming Zach Kimberg (https://github.com/zachgk) as
> a
> > new committer.
> >
> > He has been fixing important bugs in the MXNet JVM packages related to
> > usability, build issues, and more. He also created the Jenkins-based
> > publish pipeline that gives the community a standard way to build and
> > test statically linked packages. Moreover, he solved a number of license
> > problems in MXNet and brought several fixes that let us get the 1.4.0
> > release out on time.
> >
> > Thanks,
> > Qing
> >
>


Re: [Announcement] New Committer - Zach Kimberg

2019-05-09 Thread Hagay Lupesko
Congratulations Zach - well deserved!

On Thu, May 9, 2019, 13:26 Qing Lan  wrote:

> Hi All,
>
> Please join me in welcoming Zach Kimberg (https://github.com/zachgk) as a
> new committer.
>
> He has been fixing important bugs in the MXNet JVM packages related to
> usability, build issues, and more. He also created the Jenkins-based
> publish pipeline that gives the community a standard way to build and test
> statically linked packages. Moreover, he solved a number of license
> problems in MXNet and brought several fixes that let us get the 1.4.0
> release out on time.
>
> Thanks,
> Qing
>


Re: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-09 Thread Hen
Noting that I am a belated +1 on the release.

I had one item regarding dataset licensing that I’d like to see improved
for the next release, but I don’t believe it would have been a blocker.

Hen
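
For anyone following along, checking a release candidate before voting
typically looks something like the sketch below (the artifact names are taken
from this thread; the KEYS location is an assumption about the usual Apache
dist layout):

    # Fetch the source tarball and its signature from the dist dev area:
    curl -fLO https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/apache-mxnet-src-1.4.1.rc0-incubating.tar.gz
    curl -fLO https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/apache-mxnet-src-1.4.1.rc0-incubating.tar.gz.asc
    # Import the release signing keys (assumed path) and verify the signature:
    curl -fL https://dist.apache.org/repos/dist/dev/incubator/mxnet/KEYS | gpg --import
    gpg --verify apache-mxnet-src-1.4.1.rc0-incubating.tar.gz.asc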

On Sat, May 4, 2019 at 00:00 Junru Shao  wrote:

> Dear MXNet community,
>
> I'm happy to announce the results of the vote.
>
> This vote passes with 12 +1 votes (3 binding), no 0 votes, and 1 -1 vote.
> +1 votes
> * Sheng Zha / binding
> * Qing Lan / binding
> * Carin Meier / binding
> * Aaron Markham
> * Pedro Larroy
> * Lai Wei
> * Damien Stanton
> * Kellen Sunderland
> * Yuxi Hu
> * Joshua Z. Zhang
> * Philip Hyunsu Cho
> * Aston Zhang
>
> 0 votes
> * No votes
>
> -1 votes
> * Anirudh Subramanian
>
> Vote thread can be found here [1]. The list of members can be found here
> [2].
>
> I'll continue with the release process and the release announcement will
> follow in the next few days.
>
> Best regards,
> Junru Shao
>
> [1]
>
> https://lists.apache.org/thread.html/6c140f4c180c259dd1b7f4ecf36f2d083ed810cd68b37d7f635f5614@%3Cdev.mxnet.apache.org%3E
> [2] http://incubator.apache.org/projects/mxnet.html
>


Re: [QUESTION] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-08 Thread Hen
Thanks Sheng. I assume the MNIST licensing covers all 5 data files.

Given that's a license with stronger conditions than the Apache license, I
think the example should give a canonical URL for the data (i.e. a homepage
of some kind) and mention the CC-BY-SA-3.0 licensing.

Hen
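
A minimal sketch of what that suggestion could look like at the top of
cpp-package/example/get_data.sh (the notice wording, the use of wget, and the
loop are illustrative, not the script's current contents; only the mirror
URLs come from this thread):

    # The MNIST dataset is licensed under CC BY-SA 3.0.
    # Canonical home page: http://yann.lecun.com/exdb/mnist/
    # The copies below are mirrors hosted on the project's S3 bucket.
    base=https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist
    for f in train-images-idx3-ubyte.gz train-labels-idx1-ubyte.gz \
             t10k-images-idx3-ubyte.gz t10k-labels-idx1-ubyte.gz; do
      wget -nc "$base/$f"
    done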

On Wed, May 8, 2019 at 1:03 PM Sheng Zha  wrote:

> MNIST dataset is under CC BY-SA 3.0 and is widely redistributed.
>
>
> -sz
>
> On Wed, May 8, 2019 at 12:57 PM Hen  wrote:
>
> > Looking at
> > apache-mxnet-src-1.4.1.rc0-incubating/cpp-package/example/get_data.sh -
> > what's the license on the data that is being pulled in?
> >
> > Namely:
> >
> > "
> >
> >
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/train-images-idx3-ubyte.gz
> > "
> > "
> >
> >
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/train-labels-idx1-ubyte.gz
> > "
> > "
> >
> >
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/t10k-images-idx3-ubyte.gz
> > "
> > "
> >
> >
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/t10k-labels-idx1-ubyte.gz
> > "
> > "http://data.mxnet.io/data/mnist_train.csv.gz;
> >
> > Thanks,
> >
> > Hen
> >
> > On Mon, Apr 29, 2019 at 11:52 PM Junru Shao 
> > wrote:
> >
> > > Dear MXNet community,
> > >
> > > This is the 3-day vote to release Apache MXNet (incubating) version
> > v1.4.1.
> > > The voting on dev@ list will start Apr 29 23:59:59 (PST) and close on
> > May
> > > 02 23:59:59.
> > >
> > > Below are links to
> > > 1) Release notes:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.1+Release+Notes
> > > .
> > > 2) Release Candidate:
> > > https://github.com/apache/incubator-mxnet/releases/tag/1.4.1.rc0.
> > > 3) Source and signatures on Apache dist server:
> > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/.
> > >
> > > Please remember to TEST first before voting accordingly:
> > > +1 = approve
> > > +0 = no opinion
> > > -1 = disapprove (provide reason)
> > >
> > > Best regards,
> > > Junru Shao
> > >
> >
>


Re: [QUESTION] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-08 Thread Sheng Zha
MNIST dataset is under CC BY-SA 3.0 and is widely redistributed.


-sz

On Wed, May 8, 2019 at 12:57 PM Hen  wrote:

> Looking at
> apache-mxnet-src-1.4.1.rc0-incubating/cpp-package/example/get_data.sh -
> what's the license on the data that is being pulled in?
>
> Namely:
>
> "
>
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/train-images-idx3-ubyte.gz
> "
> "
>
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/train-labels-idx1-ubyte.gz
> "
> "
>
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/t10k-images-idx3-ubyte.gz
> "
> "
>
> https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/t10k-labels-idx1-ubyte.gz
> "
> "http://data.mxnet.io/data/mnist_train.csv.gz;
>
> Thanks,
>
> Hen
>
> On Mon, Apr 29, 2019 at 11:52 PM Junru Shao 
> wrote:
>
> > Dear MXNet community,
> >
> > This is the 3-day vote to release Apache MXNet (incubating) version
> v1.4.1.
> > The voting on dev@ list will start Apr 29 23:59:59 (PST) and close on
> May
> > 02 23:59:59.
> >
> > Below are links to
> > 1) Release notes:
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.1+Release+Notes
> > .
> > 2) Release Candidate:
> > https://github.com/apache/incubator-mxnet/releases/tag/1.4.1.rc0.
> > 3) Source and signatures on Apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/.
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> > Best regards,
> > Junru Shao
> >
>


Re: Requesting slack access

2019-05-08 Thread Anirudh Subramanian
Sent invite!

On Wed, May 8, 2019 at 6:43 AM Sem  wrote:

> Requesting slack access
>
>


Re: [DISCUSS] 1.5.0 Release Plan

2019-05-08 Thread Anirudh Subramanian
Hi Sheng,

I had an offline discussion with the NVIDIA folks today (@ptrendx et al.). I
strongly feel that the AMP (automatic mixed precision) feature should be
included in the release: https://github.com/apache/incubator-mxnet/pull/14173 .
The PR is aimed at completion next week, but reviews and RFC discussions may
take some time, so I request that the release code freeze be extended by two
weeks.
I would also like to include
https://cwiki.apache.org/confluence/display/MXNET/Conversion+from+FP32+to+Mixed+Precision+Models,
which depends on the AMP PR.
I am also aiming to add a PR for it by this weekend or early next week, but
reviews will take longer than May 17th.

Anirudh


On Mon, May 6, 2019 at 11:49 PM Sheng Zha  wrote:

> Hi,
>
> While 1.4.1 vote on general@incubator is still on going, I’d like to
> propose that we start preparing 1.5.0 release.
>
> 1.5.0 will include changes dating back to last year, and there have been
> a lot of new features and improvements in it, so it will likely take us
> more time to prepare than 1.4.1. I propose the following timeline:
> - Cut release branch: release branch already cut. Will sync with master
> branch on 5/15/2019 EOD.
> - Code freeze: 5/17/2019. No more changes unless the release branch is in
> a broken state.
> - Tag and vote: 5/20/2019 onward.
>
> Lai Wei (roywei@) expressed to me offline that he’s willing to help drive
> this release as release manager, and I’m happy to help again as committer.
>
> If you have features in progress that you’d like to include in 1.5.0:
> - Add your feature to the scope:
> https://cwiki.apache.org/confluence/display/MXNET/1.5.0+Release+Plan+and+Status
> - Indicate in this thread:
>   - how confident you are about making it happen before the code freeze.
> If not confident, provide an estimate for a more manageable code freeze
> date so that people can discuss whether to extend the deadline or to skip
> one release for it.
> - whether your PR requires more attention to make it happen.
>
> Thanks for your attention. Comments and suggestions are also welcome.
>
> -sz


Re: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-07 Thread Sheng Zha
Correction: Joshua Zhi Zhang is a PPMC member and thus his vote is binding too, 
which brings the +1 binding votes to 4.

-sz

On 2019/05/04 07:23:17, Junru Shao  wrote: 
> As Anirudh has just changed his vote from -1 to 0 in the voting thread, the
> vote result changes to 12 +1 votes (3 binding), one 0 vote, and no -1 votes.
> 
> Thank you guys again for your hard work testing the release! I will start a
> voting thread on general@.
> 
> Thanks,
> Junru
> 
> 
> On Fri, May 3, 2019 at 11:59 PM Junru Shao  wrote:
> 
> > Dear MXNet community,
> >
> > I'm happy to announce the results of the vote.
> >
> > This vote passes with 12 +1 votes (3 binding), no 0 votes, and 1 -1 vote.
> > +1 votes
> > * Sheng Zha / binding
> > * Qing Lan / binding
> > * Carin Meier / binding
> > * Aaron Markham
> > * Pedro Larroy
> > * Lai Wei
> > * Damien Stanton
> > * Kellen Sunderland
> > * Yuxi Hu
> > * Joshua Z. Zhang
> > * Philip Hyunsu Cho
> > * Aston Zhang
> >
> > 0 votes
> > * No votes
> >
> > -1 votes
> > * Anirudh Subramanian
> >
> > Vote thread can be found here [1]. The list of members can be found here
> > [2].
> >
> > I'll continue with the release process and the release announcement will
> > follow in the next few days.
> >
> > Best regards,
> > Junru Shao
> >
> > [1]
> > https://lists.apache.org/thread.html/6c140f4c180c259dd1b7f4ecf36f2d083ed810cd68b37d7f635f5614@%3Cdev.mxnet.apache.org%3E
> > [2] http://incubator.apache.org/projects/mxnet.html
> >
> 


Re: mxnet slack access

2019-05-06 Thread Sheng Zha
Just invited you. Welcome!

-sz

On 2019/05/06 11:21:49, Geoff Bull  wrote: 
> Please invite me to slack.
> 
> Thanks
> Geoff
> 
> 


Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-06 Thread Sheng Zha
Hi Konstantin,

Thanks for your reply.

> I personally prefer small incremental changes, and starting with some MVP, 
> where M is
> minimal, meaning least possible effort (e.g. number of dependencies) used
> from conan.

Agreed. That said, this seems like a case where the adoption decision can't be
based on an MVP, as having one additional dependency just to automatically
download 3 out of some 15 dependencies doesn't seem to be a desirable state to
be in.
A feature branch has been created for you and others to collaborate on, so
that we can not only ensure the coverage but also ensure that there are enough
people willing to push this forward.

> may we define an actual scope then

As mentioned in my last reply, I'd recommend solving one of the two main use 
cases of mxnet builds. The efforts should also make it possible to remove these 
scripts here: https://github.com/apache/incubator-mxnet/tree/master/setup-utils.

> but, I am not sure, for me it looks like strange use-case. I can imagine
> you have developers with no internet connection and send them parcels with
> CDs of mxnet source code, but I think it's hard to manage such workflow
> with any tool, no matter submodules, CMake download or conan.

Thanks for the clarification. This use case is actually not that rare. People
may need to build mxnet to optimize performance on their custom hardware, in a
sandboxed environment, for security reasons. Submodules solve it by including
the source code. Since the offline use case is not what conan is designed for,
we just need to make sure that conan is not a required build tool and that
alternatives still work in this use case. I made a relevant comment in the PR
too.

To sum up, I think conan is a tool that can, at the concept level, simplify
dependency management in mxnet. We should make sure that it provides the
coverage needed (e.g. has the packages we need, at the versions we need, with
fast turnaround for upgrading dependencies) and that it doesn't break other
build needs. For now, let's use the feature branch to collaborate with people
who share the interest and are willing to help out, with the goal of having
full coverage at the time of adoption.

Best,
-sz

On 2019/05/06 08:10:29, Konstantin Ivlev  wrote: 
> Hi Sheng Zha,
> 
> >  Currently, the linked PR only includes OpenBLAS
> actually, it's OpenBLAS + OpenCV + lapack, three libraries as
> proof-of-concept
> 
> > A proof-of-concept that shows it actually replaces more dependency than
> openblas would be helpful
> may we define an actual scope then, e.g. how many dependencies - 2, 3, 5,
> half of them, or all of them?
> 
> > If the value proposition of conan is to simplify the dependency
> management, then it should unify other solutions instead of keeping these
> solutions around.
> personally, I don't think it's easy to do in a single shot. I prefer small
> incremental changes, starting with some MVP, where M is minimal, meaning
> the least possible effort (e.g. number of dependencies) spent on conan,
> then eventually migrating the other dependencies to conan one by one until
> the migration is complete. but that's just my personal vision; if you see
> that this strategy is wrong, it's okay.
> 
> - It's unclear how it impacts people with only an archive of mxnet source
> code but without network access.
> as far as I have tried, conan doesn't change that much. if you have no
> network access and only the mxnet source code, then "git submodule init"
> will fail for you. if you also have archives of the dependencies somewhere
> on your hard drive, you can point the submodules at the local repositories
> rather than GitHub. but then you will get stuck at the CMake generation
> step, which will fail to download Intel MKL.
> if you use conan instead of submodules and the CMake download, there is
> not much difference - it will fail to fetch the same dependencies without
> an internet connection on a clean machine. if you somehow happen to have
> an archive of the conan cache on your hard drive, you may unpack it and
> use it without an internet connection.
> but, I am not sure, for me it looks like a strange use-case. I can imagine
> you have developers with no internet connection and send them parcels with
> CDs of mxnet source code, but I think it's hard to manage such a workflow
> with any tool, no matter submodules, CMake download or conan.
> 
> yours sincerely, Konstantin
> 
> 
> Sun, May 5, 2019 at 06:25, Sheng Zha :
> 
> > To be clear, my intention is really to prevent a seemingly good solution
> > from exacerbating the problem it sets out to solve. This tends to happen
> > when there are not enough people to drive it to the end.
> >
> > If there is additional value in this solution that people feel outweighs
> > the problems below, I'd be more than happy to be persuaded to vote
> > otherwise.
> >
> > -sz
> >
> > On 2019/05/04 23:08:43, Sheng Zha  wrote:
> > > Thank you for the explanation and sorry that I missed the earlier
> > context as it 

RE: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-06 Thread Lv, Tao A
Thank you Kellen. Great to hear that~

-Original Message-
From: kellen sunderland [mailto:kellen.sunderl...@gmail.com] 
Sent: Monday, May 6, 2019 3:27 PM
To: dev@mxnet.incubator.apache.org
Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

Hey Tao.  Fixed my problem by recompiling cmake with ssl support (so basically 
just a problem on my end).  After that MKL downloaded correctly, and everything 
compiled correctly with Anirudh's build flags.

On Sun, May 5, 2019 at 6:59 PM Lv, Tao A  wrote:

> Hi Kellen, does the problem still exist for you? I just built mxnet
> 1.4.1rc0 + mkldnn from source with cmake on my centos machine and 
> everything works well:
>
> -- Downloading MKLML...
> -- [download 0% complete]
> ...
> -- [download 100% complete]
> -- Setting MKLROOT path to
> /home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928
> -- CMAKE_BUILD_TYPE is unset, defaulting to Release
> -- Detecting Intel(R) MKL: trying mklml_intel
> -- Intel(R) MKL: include
> /home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928/include
> -- Intel(R) MKL: lib
> /home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928/lib/libmklml_intel.so
>
> Thank you Junru for managing this release. We also verified MKL-DNN 
> related tests, convergence, quantization and FP32/INT8 performance. 
> They all look good to me.
>
> -tao
>
> -Original Message-
> From: kellen sunderland [mailto:kellen.sunderl...@gmail.com]
> Sent: Monday, May 6, 2019 3:20 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 
> 1.4.1.rc0
>
> I gave checking a shot but am now getting
>
> -- Downloading MKLML...
> CMake Error at cmake/DownloadMKLML.cmake:62 (file):
>   file DOWNLOAD HASH mismatch
>
> Assuming that's a transient mkl dep download error, I'll try again later.
>
>
> On Sat, May 4, 2019 at 12:09 AM Junru Shao 
> wrote:
>
> > Thank you Anirudh for your quick response! I will change the result 
> > accordingly :-)
> >
> > On Fri, May 3, 2019 at 11:58 PM Anirudh Subramanian 
> >  > >
> > wrote:
> >
> > > No worries, maybe it's just something with my setup.
> > > Moving my vote to +0, pending someone else's check.
> > >
> > > On Fri, May 3, 2019 at 11:39 PM Junru Shao 
> > > 
> > > wrote:
> > >
> > > > Hi Anirudh,
> > > >
> > > > Thanks for reporting this!
> > > >
> > > > I verified on my EC2 machine for the second time. It perfectly 
> > > > builds
> > > with
> > > > your commands. It is a bit weird...I noticed that there is a 
> > > > subtle difference that my ninja progress bar is like 
> > > > "[xxx/506]", while yours
> > is
> > > > "[xxx/488]". I am not sure if there is anything different 
> > > > between our settings.
> > > >
> > > > My understanding is that cmake should work because it is tested 
> > > > in our
> > CI
> > > > system under "ci/jenkins/incubator-mxnet" (
> > > >
> > > >
> > >
> > http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/v1.4.x/201/pipeline
> > > > ).
> > > >
> > > > It will be much appreciated if someone could help confirm 
> > > > whether cmake works on their side.
> > > >
> > > > Thanks,
> > > > Junru
> > > >
> > > >
> > > > On Fri, May 3, 2019 at 9:43 PM Anirudh Subramanian <
> > > anirudh2...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Junru,
> > > > >
> > > > > I am on v1.4.x , and my dmlc-core commit is this one :
> > > > >
> > > > >
> > > >
> > >
> > https://github.com/dmlc/dmlc-core/tree/0a0e8addf92e1287fd7a25c6314016b8c0138dee
> > > > >
> > > > > Anirudh
> > > > >
> > > > > On Fri, May 3, 2019 at 8:30 PM Junru Shao 
> > > > > 
> > > > wrote:
> > > > >
> > > > > > Hey Anirudh,
> > > > > >
> > > > > > Although the vote has been closed, I am very interested in 
> > > > > > digging
> > > into
> > > > > > this issue.
> > > > > >
> > > > > > I built on my EC2 machine using your instructions, and 

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-06 Thread Konstantin Ivlev
Hi Sheng Zha,

>  Currently, the linked PR only includes OpenBLAS
actually, it's OpenBLAS + OpenCV + lapack, three libraries as
proof-of-concept

> A proof-of-concept that shows it actually replaces more dependency than
openblas would be helpful
may we define an actual scope then, e.g. how many dependencies - 2, 3, 5,
half of them, or all of them?

> If the value proposition of conan is to simplify the dependency
management, then it should unify other solutions instead of keeping these
solutions around.
personally, I don't think it's easy to do in a single shot. I prefer small
incremental changes, starting with some MVP, where M is minimal, meaning the
least possible effort (e.g. number of dependencies) spent on conan, then
eventually migrating the other dependencies to conan one by one until the
migration is complete. but that's just my personal vision; if you see that
this strategy is wrong, it's okay.

- It's unclear how it impacts people with only an archive of mxnet source
code but without network access.
as far as I have tried, conan doesn't change that much. if you have no
network access and only the mxnet source code, then "git submodule init" will
fail for you. if you also have archives of the dependencies somewhere on your
hard drive, you can point the submodules at the local repositories rather
than GitHub. but then you will get stuck at the CMake generation step, which
will fail to download Intel MKL.
if you use conan instead of submodules and the CMake download, there is not
much difference - it will fail to fetch the same dependencies without an
internet connection on a clean machine. if you somehow happen to have an
archive of the conan cache on your hard drive, you may unpack it and use it
without an internet connection.
but, I am not sure, for me it looks like a strange use-case. I can imagine
you have developers with no internet connection and send them parcels with
CDs of mxnet source code, but I think it's hard to manage such a workflow
with any tool, no matter submodules, CMake download or conan.

yours sincerely, Konstantin
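
As a concrete illustration of the submodule half of that discussion, an
offline build from a local archive could look roughly like the sketch below
(the paths are hypothetical, and dmlc-core stands in for any of the
submodules):

    # Clone mxnet from a local mirror, then point one submodule at a local
    # copy instead of GitHub before fetching it:
    git clone /media/archive/incubator-mxnet.git mxnet
    cd mxnet
    git submodule init
    git config submodule.3rdparty/dmlc-core.url /media/archive/dmlc-core.git
    git submodule update 3rdparty/dmlc-core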


Sun, May 5, 2019 at 06:25, Sheng Zha :

> To be clear, my intention is really to prevent a seemingly good solution
> from exacerbating the problem it sets out to solve. This tends to happen
> when there are not enough people to drive it to the end.
>
> If there is additional value in this solution that people feel outweighs
> the problems below, I'd be more than happy to be persuaded to vote
> otherwise.
>
> -sz
>
> On 2019/05/04 23:08:43, Sheng Zha  wrote:
> > Thank you for the explanation and sorry that I missed the earlier
> context as it has been a while. While I like the idea of simplifying the
> dependency management with tools like conan, I have the following concerns
> on this vote as-is (it's also my take on why I think the PR is stuck):
> >
> > - It's unclear how many of mxnet's dependency needs conan can cover.
> >   Currently, the linked PR only includes OpenBLAS. A proof-of-concept
> > that shows it actually replaces more dependencies than openblas would be
> > helpful. On a high level, there are two types of builds for mxnet:
> >   * User's custom build-from-source: 1) usually dynamic linking is used.
> 2) users may not enable all features, and users may want to pull a subset
> of the dependencies. 3) users may want mxnet build system to pull the
> dependencies, or they may not. (for conan it's ok to just focus on the
> former)
> >   * Binary distribution for pip and maven: 1) static linking is used
> (requires -fPIC). 2) all features are enabled. 3) dependencies are pulled
> in with scripts in mxnet.
> >   Handling one of the above cases would be a good showcase for the value
> of the new tool.
> >
> > - It's unclear how it impacts people with only an archive of mxnet
> source code but without network access.
> >   This applies to the dependencies captured as submodules, which you
> > mentioned as one way that mxnet manages dependencies.
> >
> > - If the value proposition of conan is to simplify the dependency
> management, then it should unify other solutions instead of keeping these
> solutions around.
> >
> > Overall, it would be helpful to have a clear message about what exactly
> > conan can replace, along with a proof of concept that works. Otherwise, I
> > fear that we may be introducing yet another way to manage dependencies
> > that further complicates the existing problem.
> >
> > That said, I'm not suggesting that we impose the burden to implement
> everything on you alone, and it's ok to rally people who are interested in
> this solution to help out. To facilitate this, I created a feature branch
> so that it's easier for you and people who are enthusiastic about this to
> work together [1].
> >
> > For now, I'm voting -1 to this proposal and I hope you understand.
> >
> > -sz
> >
> > [1] https://github.com/apache/incubator-mxnet/tree/conan
> >
> > On 2019/05/03 07:51:34, Konstantin Ivlev  wrote:
> > > hi Sheng Zha,
> > >
> > > on pull request review I was 

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-06 Thread kellen sunderland
Hey Tao.  Fixed my problem by recompiling cmake with ssl support (so
basically just a problem on my end).  After that MKL downloaded correctly,
and everything compiled correctly with Anirudh's build flags.
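
For anyone hitting the same symptoms, the fix looks roughly like the sketch
below; rebuilding CMake with OpenSSL uses a documented bootstrap option, while
the MKLML release URL is an assumption about where the tarballs are published:

    # Rebuild CMake from source with OpenSSL support:
    ./bootstrap -- -DCMAKE_USE_OPENSSL=ON && make && sudo make install
    # To rule out a genuinely corrupted download, compare the tarball's SHA256
    # against the constant hard-coded in cmake/DownloadMKLML.cmake:
    wget https://github.com/intel/mkl-dnn/releases/download/v0.17/mklml_lnx_2019.0.1.20180928.tgz
    sha256sum mklml_lnx_2019.0.1.20180928.tgz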

On Sun, May 5, 2019 at 6:59 PM Lv, Tao A  wrote:

> Hi Kellen, does the problem still exist for you? I just built mxnet
> 1.4.1rc0 + mkldnn from source with cmake on my centos machine and
> everything works well:
>
> -- Downloading MKLML...
> -- [download 0% complete]
> ...
> -- [download 100% complete]
> -- Setting MKLROOT path to
> /home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928
> -- CMAKE_BUILD_TYPE is unset, defaulting to Release
> -- Detecting Intel(R) MKL: trying mklml_intel
> -- Intel(R) MKL: include
> /home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928/include
> -- Intel(R) MKL: lib
> /home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928/lib/libmklml_intel.so
>
> Thank you Junru for managing this release. We also verified MKL-DNN
> related tests, convergence, quantization and FP32/INT8 performance. They
> all look good to me.
>
> -tao
>
> -Original Message-
> From: kellen sunderland [mailto:kellen.sunderl...@gmail.com]
> Sent: Monday, May 6, 2019 3:20 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0
>
> I gave checking a shot but am now getting
>
> -- Downloading MKLML...
> CMake Error at cmake/DownloadMKLML.cmake:62 (file):
>   file DOWNLOAD HASH mismatch
>
> Assuming that's a transient mkl dep download error, I'll try again later.
>
>
> On Sat, May 4, 2019 at 12:09 AM Junru Shao 
> wrote:
>
> > Thank you Anirudh for your quick response! I will change the result
> > accordingly :-)
> >
> > On Fri, May 3, 2019 at 11:58 PM Anirudh Subramanian
> >  > >
> > wrote:
> >
> > > No worries, maybe it's just something with my setup.
> > > Moving my vote to +0, pending someone else's check.
> > >
> > > On Fri, May 3, 2019 at 11:39 PM Junru Shao 
> > > wrote:
> > >
> > > > Hi Anirudh,
> > > >
> > > > Thanks for reporting this!
> > > >
> > > > I verified on my EC2 machine for the second time. It perfectly
> > > > builds
> > > with
> > > > your commands. It is a bit weird...I noticed that there is a
> > > > subtle difference that my ninja progress bar is like "[xxx/506]",
> > > > while yours
> > is
> > > > "[xxx/488]". I am not sure if there is anything different between
> > > > our settings.
> > > >
> > > > My understanding is that cmake should work because it is tested in
> > > > our
> > CI
> > > > system under "ci/jenkins/incubator-mxnet" (
> > > >
> > > >
> > >
> > http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/v1.4.x/201/pipeline
> > > > ).
> > > >
> > > > It will be much appreciated if someone could help confirm whether
> > > > cmake works on their side.
> > > >
> > > > Thanks,
> > > > Junru
> > > >
> > > >
> > > > On Fri, May 3, 2019 at 9:43 PM Anirudh Subramanian <
> > > anirudh2...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Junru,
> > > > >
> > > > > I am on v1.4.x , and my dmlc-core commit is this one :
> > > > >
> > > > >
> > > >
> > >
> > https://github.com/dmlc/dmlc-core/tree/0a0e8addf92e1287fd7a25c6314016b8c0138dee
> > > > >
> > > > > Anirudh
> > > > >
> > > > > On Fri, May 3, 2019 at 8:30 PM Junru Shao
> > > > > 
> > > > wrote:
> > > > >
> > > > > > Hey Anirudh,
> > > > > >
> > > > > > Although the vote has been closed, I am very interested in
> > > > > > digging
> > > into
> > > > > > this issue.
> > > > > >
> > > > > > I built on my EC2 machine using your instructions, and it seems
> > > > > > that everything is working fine...
> > > > > >
> > > > > > Then, I noticed that your issue seems to be related to
> > > > > > unittests in dmlc-core, not in mxnet. Could you kindly check
> > > 

RE: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-05 Thread Lv, Tao A
Hi Kellen, does the problem still exist for you? I just built mxnet 1.4.1rc0 + 
mkldnn from source with cmake on my centos machine and everything works well:

-- Downloading MKLML...
-- [download 0% complete]
...
-- [download 100% complete]
-- Setting MKLROOT path to 
/home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928
-- CMAKE_BUILD_TYPE is unset, defaulting to Release
-- Detecting Intel(R) MKL: trying mklml_intel
-- Intel(R) MKL: include 
/home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928/include
-- Intel(R) MKL: lib 
/home/lvtao/Workspace/mxnet-official/build/mklml/mklml_lnx_2019.0.1.20180928/lib/libmklml_intel.so

Thank you Junru for managing this release. We also verified MKL-DNN related 
tests, convergence, quantization and FP32/INT8 performance. They all look good 
to me.

-tao
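
(For reference, a build along the lines Tao describes can be reproduced with
something like the sketch below; USE_MKLDNN is the relevant CMake switch, and
the remaining flags are illustrative.)

    git clone --recursive -b v1.4.x https://github.com/apache/incubator-mxnet.git
    cd incubator-mxnet && mkdir build && cd build
    cmake -DUSE_MKLDNN=ON -DUSE_CUDA=OFF ..
    make -j"$(nproc)"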

-Original Message-
From: kellen sunderland [mailto:kellen.sunderl...@gmail.com] 
Sent: Monday, May 6, 2019 3:20 AM
To: dev@mxnet.incubator.apache.org
Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

I gave checking a shot but am now getting

-- Downloading MKLML...
CMake Error at cmake/DownloadMKLML.cmake:62 (file):
  file DOWNLOAD HASH mismatch

Assuming that's a transient mkl dep download error, I'll try again later.


On Sat, May 4, 2019 at 12:09 AM Junru Shao  wrote:

> Thank you Anirudh for your quick response! I will change the result 
> accordingly :-)
>
> On Fri, May 3, 2019 at 11:58 PM Anirudh Subramanian 
>  >
> wrote:
>
> > No worries, maybe it's just something with my setup.
> > Moving my vote to +0, pending someone else's check.
> >
> > On Fri, May 3, 2019 at 11:39 PM Junru Shao 
> > wrote:
> >
> > > Hi Anirudh,
> > >
> > > Thanks for reporting this!
> > >
> > > I verified on my EC2 machine for the second time. It perfectly 
> > > builds
> > with
> > > your commands. It is a bit weird...I noticed that there is a 
> > > subtle difference that my ninja progress bar is like "[xxx/506]", 
> > > while yours
> is
> > > "[xxx/488]". I am not sure if there is anything different between 
> > > our settings.
> > >
> > > My understanding is that cmake should work because it is tested in 
> > > our
> CI
> > > system under "ci/jenkins/incubator-mxnet" (
> > >
> > >
> >
> http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/v1.4.x/201/pipeline
> > > ).
> > >
> > > It will be much appreciated if someone could help confirm whether 
> > > cmake works on their side.
> > >
> > > Thanks,
> > > Junru
> > >
> > >
> > > On Fri, May 3, 2019 at 9:43 PM Anirudh Subramanian <
> > anirudh2...@gmail.com>
> > > wrote:
> > >
> > > > Hi Junru,
> > > >
> > > > I am on v1.4.x , and my dmlc-core commit is this one :
> > > >
> > > >
> > >
> >
> https://github.com/dmlc/dmlc-core/tree/0a0e8addf92e1287fd7a25c6314016b8c0138dee
> > > >
> > > > Anirudh
> > > >
> > > > On Fri, May 3, 2019 at 8:30 PM Junru Shao 
> > > > 
> > > wrote:
> > > >
> > > > > Hey Anirudh,
> > > > >
> > > > > Although the vote has been closed, I am very interested in 
> > > > > digging
> > into
> > > > > this issue.
> > > > >
> > > > > I built on my EC2 machine using your instructions, and it seems
> > > > > that everything is working fine...
> > > > >
> > > > > Then, I noticed that your issue seems to be related to 
> > > > > unittests in dmlc-core, not in mxnet. Could you kindly check 
> > > > > the submodule git
> > hash?
> > > > > Also, could you check if you are testing on v1.4.x branch?
> > > > >
> > > > > Thanks,
> > > > > Junru
> > > > >
> > > > >
> > > > >
> > > > > On Fri, May 3, 2019 at 4:33 PM Anirudh Subramanian <
> > > > anirudh2...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > -1 (binding)
> > > > > >
> > > > > > Is the cmake build failing for the 1.4.1 release tag ? Is 
> > > > > > this a
> > > known
> > > > > > issue ?
> > > > > >
> > > > > > Did the

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-05 Thread kellen sunderland
[deeply quoted build log trimmed: the message body is the tail of the failed
link command for 3rdparty/dmlc-core/test/unittest/dmlc_unit_tests, listing the
unittest_*.cc.o objects linked against lib/libgtestd.a and
3rdparty/dmlc-core/libdmlc.a; the undefined-reference error itself is quoted
in the surrounding messages]

Re: apply for slack channel

2019-05-05 Thread Tao Lv
Hi Shuchun,

Welcome to the MXNet community!

The invite has been sent. You will be added to the ASF 'general' channel, and
you can search for the 'mxnet' channel.

-tao

On Sun, May 5, 2019 at 5:38 PM shuchun liu  wrote:

> thanks.
>
> --
> --
> --
> Best Regards,
> Liu Shuchun techgo.io
> Address: Changtai Plaza Tower A, Lane 2889 Jinke Road, Zhangjiang, Pudong New District, Shanghai
> Tel: +86 13524123160
>


Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-04 Thread Sheng Zha
To be clear, my intention is really to prevent a seemingly good solution from
exacerbating the problem it sets out to solve. This tends to happen when there
are not enough people to drive it to the end.

If there is additional value in this solution that people feel outweighs the
problems below, I'd be more than happy to be persuaded to vote otherwise.

-sz

On 2019/05/04 23:08:43, Sheng Zha  wrote: 
> Thank you for the explanation and sorry that I missed the earlier context as 
> it has been a while. While I like the idea of simplifying the dependency 
> management with tools like conan, I have the following concerns on this vote 
> as-is (it's also my take on why I think the PR is stuck):
> 
> - It's unclear how many of mxnet's dependency needs conan can cover.
>   Currently, the linked PR only includes OpenBLAS. A proof-of-concept that
> shows it actually replaces more dependencies than openblas would be helpful.
> On a high level, there are two types of builds for mxnet:
>   * User's custom build-from-source: 1) usually dynamic linking is used. 2) 
> users may not enable all features, and users may want to pull a subset of the 
> dependencies. 3) users may want mxnet build system to pull the dependencies, 
> or they may not. (for conan it's ok to just focus on the former)
>   * Binary distribution for pip and maven: 1) static linking is used 
> (requires -fPIC). 2) all features are enabled. 3) dependencies are pulled in 
> with scripts in mxnet.
>   Handling one of the above cases would be a good showcase for the value of 
> the new tool.
> 
> - It's unclear how it impacts people with only an archive of mxnet source 
> code but without network access.
>   This applies to the dependencies captured as submodules, which you
> mentioned as one way that mxnet manages dependencies.
> 
> - If the value proposition of conan is to simplify the dependency management, 
> then it should unify other solutions instead of keeping these solutions 
> around.
> 
> Overall, it would be helpful to have a clear message about what exactly
> conan can replace, along with a proof of concept that works. Otherwise, I
> fear that we may be introducing yet another way to manage dependencies that
> further complicates the existing problem.
> 
> That said, I'm not suggesting that we impose the burden to implement 
> everything on you alone, and it's ok to rally people who are interested in 
> this solution to help out. To facilitate this, I created a feature branch so 
> that it's easier for you and people who are enthusiastic about this to work 
> together [1].
> 
> For now, I'm voting -1 to this proposal and I hope you understand.
> 
> -sz
> 
> [1] https://github.com/apache/incubator-mxnet/tree/conan
> 
> On 2019/05/03 07:51:34, Konstantin Ivlev  wrote: 
> > hi Sheng Zha,
> > 
> > on the pull request review I was told by Anirudh (anirudhacharya) and
> > Roshani Nagmote to start a discussion/vote on the mxnet dev list. it
> > seems to be a vicious circle now - on GitHub I am told to hold a vote,
> > and on the vote I am told to use GitHub; this doesn't help much.
> > FYI the GitHub review is stuck: it has been open since November 2018 and
> > is still not approved (however, there were no objections during the
> > review). The previous discussion in the e-mail thread also didn't
> > encounter any objections, and all questions were answered. The JIRA
> > ticket has no discussion at all (except duplicates of comments made on
> > GitHub).
> > so let's proceed with the 3-day vote for now, as the other communication
> > channels were already tried with no success.
> > 
> > yours sincerely, Konstantin
> > 
> > Fri, May 3, 2019 at 14:17, Sheng Zha :
> > 
> > > Hi Konstantin,
> > >
> > > While conan looks like an option that's worth exploring, given that your
> > > request is to merge the pull request, I'd suggest that the request should
> > > go through the regular pull request review and it doesn't really need a
> > > vote (as it doesn't substitute reviews anyway)
> > >
> > > If you would like to gather more attention to it, feel free to ping in a
> > > discussion thread.
> > >
> > > -sz
> > >
> > > On 2019/05/03 06:29:55, Konstantin Ivlev  wrote:
> > > > Dear MXNet community,
> > > >
> > > > This is the 3-day vote to add conan support for Apache MXNet 
> > > > (incubating)
> > > > version v1.4.1.
> > > > The voting on dev@ list will start May 03 23:59:59 (PST) and close on
> > > May
> > > > 06 23:59:59.
> > > >
> > > > Background: conan is an open-source, freeware, cross-platform package
> > > > manager for C and C++ projects, written in python. it provides
> > > > integration with various build systems, including CMake. conan may
> > > > use bintray as a server to store and download pre-built packages, or
> > > > packages may always be built from sources.
> > > >
> > > > Problem: currently (as of v1.4.1), Apache MXNet (incubating) is using
> > > > several ways to fetch 3rd-party dependencies simultaneously, 

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-04 Thread Sheng Zha
Thank you for the explanation, and sorry that I missed the earlier context, as
it has been a while. While I like the idea of simplifying dependency
management with tools like conan, I have the following concerns about this
vote as-is (it's also my take on why I think the PR is stuck):

- It's unclear how many of mxnet's dependency needs conan can cover.
  Currently, the linked PR only includes OpenBLAS. A proof-of-concept that
shows it actually replaces more dependencies than openblas would be helpful.
On a high level, there are two types of builds for mxnet:
  * User's custom build-from-source: 1) usually dynamic linking is used. 2) 
users may not enable all features, and users may want to pull a subset of the 
dependencies. 3) users may want mxnet build system to pull the dependencies, or 
they may not. (for conan it's ok to just focus on the former)
  * Binary distribution for pip and maven: 1) static linking is used (requires 
-fPIC). 2) all features are enabled. 3) dependencies are pulled in with scripts 
in mxnet.
  Handling one of the above cases would be a good showcase for the value of the 
new tool.

- It's unclear how it impacts people with only an archive of mxnet source code 
but without network access.
  This applies to the dependencies captured as submodules, which you mentioned
as one way that mxnet manages dependencies.

- If the value proposition of conan is to simplify the dependency management, 
then it should unify other solutions instead of keeping these solutions around.

Overall, it would be helpful to have a clear message about what exactly conan
can replace, along with a proof of concept that works. Otherwise, I fear that
we may be introducing yet another way to manage dependencies that further
complicates the existing problem.

That said, I'm not suggesting that we impose the burden to implement everything 
on you alone, and it's ok to rally people who are interested in this solution 
to help out. To facilitate this, I created a feature branch so that it's easier 
for you and people who are enthusiastic about this to work together [1].

For now, I'm voting -1 to this proposal and I hope you understand.

-sz

[1] https://github.com/apache/incubator-mxnet/tree/conan
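
For readers skimming the thread, the workflow being voted on looks roughly
like the sketch below, against the conan 1.x CLI; the package reference and
generator are illustrative, not necessarily what the PR pins:

    # conanfile.txt (illustrative):
    #   [requires]
    #   openblas/0.3.5@conan/stable
    #   [generators]
    #   cmake
    conan install . --build=missing
    mkdir build && cd build
    cmake .. && make -j"$(nproc)"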

On 2019/05/03 07:51:34, Konstantin Ivlev  wrote: 
> hi Sheng Zha,
> 
> on the pull request review I was told by Anirudh (anirudhacharya) and
> Roshani Nagmote to start a discussion/vote on the mxnet dev list. it seems
> to be a vicious circle now - on GitHub I am told to hold a vote, and on the
> vote I am told to use GitHub; this doesn't help much.
> FYI the GitHub review is stuck: it has been open since November 2018 and is
> still not approved (however, there were no objections during the review).
> The previous discussion in the e-mail thread also didn't encounter any
> objections, and all questions were answered. The JIRA ticket has no
> discussion at all (except duplicates of comments made on GitHub).
> so let's proceed with the 3-day vote for now, as the other communication
> channels were already tried with no success.
> 
> yours sincerely, Konstantin
> 
> Fri, May 3, 2019 at 14:17, Sheng Zha :
> 
> > Hi Konstantin,
> >
> > While conan looks like an option that's worth exploring, given that your
> > request is to merge the pull request, I'd suggest that the request should
> > go through the regular pull request review and it doesn't really need a
> > vote (as it doesn't substitute reviews anyway)
> >
> > If you would like to gather more attention to it, feel free to ping in a
> > discussion thread.
> >
> > -sz
> >
> > On 2019/05/03 06:29:55, Konstantin Ivlev  wrote:
> > > Dear MXNet community,
> > >
> > > This is the 3-day vote to add conan support for Apache MXNet (incubating)
> > > version v1.4.1.
> > > The voting on dev@ list will start May 03 23:59:59 (PST) and close on
> > May
> > > 06 23:59:59.
> > >
> > > Background: conan is an open-source, freeware, cross-platform package
> > > manager for C and C++ projects, written in python. it provides
> > > integration with various build systems, including CMake. conan may use
> > > bintray as a server to store and download pre-built packages, or
> > > packages may always be built from sources.
> > >
> > > Problem: currently (as of v1.4.1), Apache MXNet (incubating) is using
> > > several ways to fetch 3rd-party dependencies simultaneously, for
> > > instance:
> > > 1. download GitHub archives during the build
> > > - OpenBLAS
> > > - OpenCV
> > > 2. conda (alternative way to GitHub archives)
> > > 3. download from CMake
> > > - Intel Math Kernel Library (MKL)
> > > 4. Git submodules
> > > - cub
> > > - dlpack
> > > - dmlc-core
> > > - googletest
> > > - mkldnn
> > > - mshadow
> > > - onnx-tensorrt
> > > - openmp
> > > - ps-lite
> > > - tvm
> > > therefore, there are multiple places to look for 3rd parties, and it's
> > > hard to update them, as you need to remember or figure out how to
> > > update a particular dependency to a newer version, 

Re: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-04 Thread Junru Shao
As Anirudh has just changed his vote from -1 to 0 in the voting thread, the
vote result changes to 12 +1 votes (3 binding), one 0 vote, and no -1 votes.

Thank you guys again for your hard work testing the release! I will start a
voting thread on general@.

Thanks,
Junru


On Fri, May 3, 2019 at 11:59 PM Junru Shao  wrote:

> Dear MXNet community,
>
> I'm happy to announce the results of the vote.
>
> This vote passes with 12 +1 votes (3 binding), no 0 votes, and 1 -1 vote.
> +1 votes
> * Sheng Zha / binding
> * Qing Lan / binding
> * Carin Meier / binding
> * Aaron Markham
> * Pedro Larroy
> * Lai Wei
> * Damien Stanton
> * Kellen Sunderland
> * Yuxi Hu
> * Joshua Z. Zhang
> * Philip Hyunsu Cho
> * Aston Zhang
>
> 0 votes
> * No votes
>
> -1 votes
> * Anirudh Subramanian
>
> Vote thread can be found here [1]. The list of members can be found here
> [2].
>
> I'll continue with the release process and the release announcement will
> follow in the next few days.
>
> Best regards,
> Junru Shao
>
> [1]
> https://lists.apache.org/thread.html/6c140f4c180c259dd1b7f4ecf36f2d083ed810cd68b37d7f635f5614@%3Cdev.mxnet.apache.org%3E
> [2] http://incubator.apache.org/projects/mxnet.html
>


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-04 Thread Junru Shao
[deeply quoted build log trimmed: the tail of the failed link command for
3rdparty/dmlc-core/test/unittest/dmlc_unit_tests, compiled with '-Wall
-Wno-unknown-pragmas -fPIC -g -O0 -msse2 -std=c++11 -fopenmp -g -pthread' and
linked against lib/libgtestd.a and 3rdparty/dmlc-core/libdmlc.a; the
undefined-reference error itself is preserved in Anirudh's message below]
Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-04 Thread Anirudh Subramanian
> > > > [build log trimmed: the FAILED link command for
> > > > 3rdparty/dmlc-core/test/unittest/dmlc_unit_tests, linking the
> > > > unittest_*.cc.o objects against lib/libgtestd.a and libdmlc.a]
> > > >
> > > > 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_logging.cc.o:
> > > > In function `Logging_basics_Test::TestBody()':
> > > > /home/ubuntu/experimentals/master_mxnet/build/../3rdparty/dmlc-core/test/unittest/unittest_logging.cc:19:
> > > > undefined reference to `testing::internal::DeathTest::Create(char const*,
> > > > testing::internal::RE const*, char const*, int,
> > > > testing::internal::DeathTest**)'
> > > > collect2: error: ld returned 1 exit status
> > > >
> > > > Anirudh
> > > >
> > > > On Fri, May 3, 2019 at 8:04 AM kellen sunderland <
> > > > kellen.sunderl...@gmail.com> wrote:
> > > >
> > > > > No problem Damien, glad to have you helping us validate the release.
> > > > > Just wanted to make sure we have enough votes to pass the general
> > > > > vote (the next release step), and with Sheng I think we should.
> > > > >
> > > > > On Fri, May 3, 2019 at 7:52 AM Damien Stanton <
> &g
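
A note on the `undefined reference to testing::internal::DeathTest::Create`
failure quoted in this thread: that symbol only exists in a gtest library
built with death-test support, so a stale or mismatched lib/libgtestd.a
relative to the gtest headers is a plausible culprit. One quick way to test
that hypothesis (a sketch, assuming the build-directory layout from the log):

    # Count DeathTest symbols in the prebuilt archive; zero hits would mean it
    # was built without death tests (or from a different gtest version):
    nm -C lib/libgtestd.a | grep -c 'DeathTest::Create'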

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-04 Thread Junru Shao
> > > [build log trimmed: the FAILED link command for
> > > 3rdparty/dmlc-core/test/unittest/dmlc_unit_tests]
> > >
> > > 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_logging.cc.o:
> > > In function `Logging_basics_Test::TestBody()':
> > > /home/ubuntu/experimentals/master_mxnet/build/../3rdparty/dmlc-core/test/unittest/unittest_logging.cc:19:
> > > undefined reference to `testing::internal::DeathTest::Create(char const*,
> > > testing::internal::RE const*, char const*, int,
> > > testing::internal::DeathTest**)'
> > > collect2: error: ld returned 1 exit status
> > >
> > > Anirudh
> > >
> > > On Fri, May 3, 2019 at 8:04 AM kellen sunderland <
> > > kellen.sunderl...@gmail.com> wrote:
> > >
> > > > > No problem Damien, glad to have you helping us validate the release.
> > > > > Just wanted to make sure we have enough votes to pass the general vote
> > > > > (the next release step) and with Sheng I think we should.
> > > >
> > > > On Fri, May 3, 2019 at 7:52 AM Damien Stanton <
> > damien.stan...@gmail.com>
> > > > wrote:
> > > >
> > > > > Ah, I misunderstood the binding/non-binding distinction. I am not a
> > > PPMC
> > > > > member, so my vote is non-binding.
> > > > >
> > > > > Best,
> > > > > Damien
> > > > >
> > > > > On Fri, May 3, 2019 at 3:19 AM kellen sunderland <
> > > > > kellen.sunderl...@gmail.com> wrote:
> > > > >
> > > > > > Hi Junru could you give a quick summary of the binding /
> > non-binding
> > > > > votes.
> > > > > >
> > > > > > Damien just want to confirm, are you a member of the PPMC for
> > MXNet?
> > > > > > Usually committers or community members (like most of us) are
> > > > encouraged
> > > > > to
> > > > > > test and vote, but technically count as non-binding for releases.
> > > > > >
> > > > > > Sheng can we assume you're +1 on the release?
> > > > > >
> > > > > > On Fri, May 3, 2019 at 12:09 AM Junru Shao <
> > junrushao1...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi folks,
> > > > > > >
> > > > > > > So far 

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread Anirudh Subramanian
> > 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_optional.cc.o
> > 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_main.cc.o
> > 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_env.cc.o
> > 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_thread_group.cc.o
> > -o 3rdparty/dmlc-core/test/unittest/dmlc_unit_tests  -rdynamic
> > lib/libgtestd.a 3rdparty/dmlc-core/libdmlc.a -lpthread && :
> > 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_logging.cc.o:
> > In function `Logging_basics_Test::TestBody()':
> > /home/ubuntu/experimentals/master_mxnet/build/../3rdparty/dmlc-core/test/unittest/unittest_logging.cc:19:
> > undefined reference to `testing::internal::DeathTest::Create(char const*,
> > testing::internal::RE const*, char const*, int,
> > testing::internal::DeathTest**)'
> > collect2: error: ld returned 1 exit status
> >
> > Anirudh
> >
> > On Fri, May 3, 2019 at 8:04 AM kellen sunderland <
> > kellen.sunderl...@gmail.com> wrote:
> >
> > > No problem Damien, glad to have you helping us validate the release.
> > > Just wanted to make sure we have enough votes to pass the general vote
> > (the
> > > next release step) and with Sheng I think we should.
> > >
> > > On Fri, May 3, 2019 at 7:52 AM Damien Stanton <
> damien.stan...@gmail.com>
> > > wrote:
> > >
> > > > Ah, I misunderstood the binding/non-binding distinction. I am not a
> > PPMC
> > > > member, so my vote is non-binding.
> > > >
> > > > Best,
> > > > Damien
> > > >
> > > > On Fri, May 3, 2019 at 3:19 AM kellen sunderland <
> > > > kellen.sunderl...@gmail.com> wrote:
> > > >
> > > > > Hi Junru could you give a quick summary of the binding /
> non-binding
> > > > votes.
> > > > >
> > > > > Damien just want to confirm, are you a member of the PPMC for
> MXNet?
> > > > > Usually committers or community members (like most of us) are
> > > encouraged
> > > > to
> > > > > test and vote, but technically count as non-binding for releases.
> > > > >
> > > > > Sheng can we assume you're +1 on the release?
> > > > >
> > > > > On Fri, May 3, 2019 at 12:09 AM Junru Shao <
> junrushao1...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi folks,
> > > > > >
> > > > > > So far we have collected enough binding votes. Thank you guys for
> > the
> > > > > hard
> > > > > > work testing the release!
> > > > > >
> > > > > > The vote on dev@ is closed on May 02 23:59:59 (PST). Next, we
> are
> > > > going
> > > > > to
> > > > > > vote for the Apache MXNet (incubating) release 1.4.1 on general@
> > > > > tomorrow,
> > > > > > which starts on May 3 2019, 23:59:59 PST, and ends on May 07
> 2019,
> > > > > 23:59:59
> > > > > > PST.
> > > > > >
> > > > > > Best,
> > > > > > Junru
> > > > > >
> > > > > > On Thu, May 2, 2019 at 11:29 PM Aston Zhang <
> astonlzh...@gmail.com
> > >
> > > > > wrote:
> > > > > >
> > > > > > > +1 (non-binding)
> > > > > > >
> > > > > > > Passed all the code at zh.d2l.ai
> > > > > > >
> > > > > > > On Thu, May 2, 2019 at 1:46 PM Joshua Z. Zhang <
> > > cheungc...@gmail.com
> > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > +1 (non-binding)
> > > > > > > >
> > > > > > > > Build from source with cuda/cudnn.
> > > > > > > >
> > > > > > > > - All tests passed
> > > > > > > > - GluonCV unittest scripts passed
> > > > > > > > - GluonCV training scripts passed
> > > > > > > > - No issue with python multiprocessing
> > > > > > > >
> > > > > > > > Best,
> > > > > > > >

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread Junru Shao
Hey Anirudh,

Although the vote has been closed, I am very interested in digging into
this issue.

I built on my EC2 machine using your instructions, and it seems that
everything works fine...

Then, I noticed that your issue seems to be related to unittests in
dmlc-core, not in mxnet. Could you kindly check the submodule git hash?
Also, could you check if you are testing on v1.4.x branch?
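
For reference, something like this (plain git, nothing MXNet-specific
assumed) should show whether the checkout and submodules match the release
candidate:

  git checkout 1.4.1.rc0
  git submodule update --init --recursive
  git submodule status 3rdparty/dmlc-core   # hash should match what the tag pins
  git describe --tags                       # confirm which tag you are actually on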

Thanks,
Junru



On Fri, May 3, 2019 at 4:33 PM Anirudh Subramanian 
wrote:

> -1 (binding)
>
> Is the cmake build failing for the 1.4.1 release tag? Is this a known
> issue?
>
> Did the following:
>
> cd build && cmake VERBOSE=1 -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_OPENMP=ON
> -DCMAKE_BUILD_TYPE=Debug -DUSE_DIST_KVSTORE=0 -DUSE_OPENCV=1
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DCUDNN_ROOT=/usr/local/cuda
> -DUSE_MKLDNN=1 -DUSE_MKL_IF_AVAILABLE=1 -DUSE_MKLML_MKL=1 -DUSE_ASAN=0
> -GNinja -DUSE_OPERATOR_TUNING=1 -DUSE_CPP_PACKAGE=0 -DCUDA_ARCH_NAME=Auto
> .. && ninja -v
>
> [272/488] : && /usr/bin/c++   -Wall -Wno-unknown-pragmas -fPIC -g -O0
> -msse2 -std=c++11 -fopenmp -g  -pthread
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_lockfree.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_param.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_parser.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_array_view.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_any.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_config.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_threaditer.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_serializer.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_threaditer_exc_handling.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_inputsplit.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_logging.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_json.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_optional.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_main.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_env.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_thread_group.cc.o
> -o 3rdparty/dmlc-core/test/unittest/dmlc_unit_tests  -rdynamic
> lib/libgtestd.a 3rdparty/dmlc-core/libdmlc.a -lpthread && :
> FAILED: : && /usr/bin/c++   -Wall -Wno-unknown-pragmas -fPIC -g -O0 -msse2
> -std=c++11 -fopenmp -g  -pthread
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_lockfree.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_param.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_parser.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_array_view.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_any.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_config.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_threaditer.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_serializer.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_threaditer_exc_handling.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_inputsplit.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_logging.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_json.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_optional.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_main.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_env.cc.o
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_thread_group.cc.o
> -o 3rdparty/dmlc-core/test/unittest/dmlc_unit_tests  -rdynamic
> lib/libgtestd.a 3rdparty/dmlc-core/libdmlc.a -lpthread && :
> 3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_logging.cc.o:
> In function `Logging_basics_Test::TestBody()':
> /home/ubuntu/experimentals/master_mxnet/build/../3rdparty/dmlc-core/test/unittest/unittest_logging.cc:19:
> undefined reference to `testing::internal::Death

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread Anirudh Subramanian
-1 (binding)

Is the cmake build failing for the 1.4.1 release tag? Is this a known
issue?

Did the following:

cd build && cmake VERBOSE=1 -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_OPENMP=ON
-DCMAKE_BUILD_TYPE=Debug -DUSE_DIST_KVSTORE=0 -DUSE_OPENCV=1
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DCUDNN_ROOT=/usr/local/cuda
-DUSE_MKLDNN=1 -DUSE_MKL_IF_AVAILABLE=1 -DUSE_MKLML_MKL=1 -DUSE_ASAN=0
-GNinja -DUSE_OPERATOR_TUNING=1 -DUSE_CPP_PACKAGE=0 -DCUDA_ARCH_NAME=Auto
.. && ninja -v

[272/488] : && /usr/bin/c++   -Wall -Wno-unknown-pragmas -fPIC -g -O0
-msse2 -std=c++11 -fopenmp -g  -pthread
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_lockfree.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_param.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_parser.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_array_view.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_any.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_config.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_threaditer.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_serializer.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_threaditer_exc_handling.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_inputsplit.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_logging.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_json.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_optional.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_main.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_env.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_thread_group.cc.o
-o 3rdparty/dmlc-core/test/unittest/dmlc_unit_tests  -rdynamic
lib/libgtestd.a 3rdparty/dmlc-core/libdmlc.a -lpthread && :
FAILED: : && /usr/bin/c++   -Wall -Wno-unknown-pragmas -fPIC -g -O0 -msse2
-std=c++11 -fopenmp -g  -pthread
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_lockfree.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_param.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_parser.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_array_view.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_any.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_config.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_threaditer.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_serializer.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_threaditer_exc_handling.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_inputsplit.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_logging.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_json.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_optional.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_main.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_env.cc.o
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_thread_group.cc.o
-o 3rdparty/dmlc-core/test/unittest/dmlc_unit_tests  -rdynamic
lib/libgtestd.a 3rdparty/dmlc-core/libdmlc.a -lpthread && :
3rdparty/dmlc-core/test/unittest/CMakeFiles/dmlc_unit_tests.dir/unittest_logging.cc.o:
In function `Logging_basics_Test::TestBody()':
/home/ubuntu/experimentals/master_mxnet/build/../3rdparty/dmlc-core/test/unittest/unittest_logging.cc:19:
undefined reference to `testing::internal::DeathTest::Create(char const*,
testing::internal::RE const*, char const*, int,
testing::internal::DeathTest**)'
collect2: error: ld returned 1 exit status

Anirudh

On Fri, May 3, 2019 at 8:04 AM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> No problem Damien, glad to have you helping us validate the release.
> Just wanted to make sure we have enough votes to pass the general vote (the
> next release step) and with Sheng I think we should.
>
> On Fri, May 3, 2019 at 7:52 AM Damien Stanton 
> wrote:
>
> > Ah, I misunderstood the binding/non-binding distinction. I am not a PPMC
> > member, so my vote is non-binding.
> >
> > Best,
> > Damien
> >
> > On Fri, May 3, 2019 at 3:19 AM kellen sunderland <
> > kellen.sunderl...@gmail.com> wrote:
> >
> > > Hi Junru could you give a quick summary of the binding / non-binding

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-03 Thread Junru Shao
Hi Konstantin, Kellen,

Thank you guys for the very detailed explanation! I was lacking some
relevant contexts and previous discussions, which got me confused
previously.

My understanding is that C++ is short of a perfect package manager. I totally
agree that for any C++ project, it would be great to try possible
opportunities to make our building experience smoother. This will largely
help improve the usability of any project in the C++ community.

TensorFlow uses Bazel officially, with CMake supported unofficially. PyTorch
seems to use CMake as well. My understanding is that it helps users from
the community to quickly pick up the building process. If we could help
customers (industrial or academic, internal or external) build stuff more
smoothly, it would be definitely a big plus.

It would be a good idea if we had something to try out or play with. For
example, instructions for building MXNet with Conan, customizing MXNet
build with Conan, etc. Considering mshadow is going to be merged into
mxnet, I believe it is a good opportunity to illustrate the power and
convenience of Conan.
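
For instance, even a minimal flow would help people play with it (purely
illustrative; I haven't verified this against the PR, and the conanfile
sitting at the repo root is an assumption):

  pip install conan
  mkdir build && cd build
  conan install .. --build=missing   # fetch prebuilt deps where available, build the rest
  cmake -GNinja .. && ninja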

Again, thank you guys for the valuable and patient explanation!

Thanks,
Junru

On Fri, May 3, 2019 at 10:14 AM Konstantin Ivlev 
wrote:

> Hi Junru,
>
> > I am actually a bit concerned about the security issues. We are asked to
> download binaries from third-party websites, which are not controlled or
> validated by Apache
> it's possible to run conan server inside apache network and download
> binaries and sources only from this remote
>
> > CMake does support downloading artifacts from a pre-defined website very
> > well
> same for conan, it also supports downloading of artifacts from pre-defined
> web-sites
>
> > rather than broken links to nonsense
> please let me know which links are broken. I've checked them all, and they
> are all available for me. maybe your ISP is blocking something?
>
> > most of the dependencies you mentioned adopts CMake, rather than Conan
> I don't think it's relevant to compare CMake (meta-build-system) and Conan
> (package manager), they have different purposes, and they might be used
> simultaneously, or only one of them might be used, they aren't mutually
> exclusive.
>
> > Would you mind at least mentioning the benefits somewhere in this thread
> okay, let me list some benefits (not all of them, but something I have on
> mind)
> - cross-platform, runs on Windows, Linux, OSX, FreeBSD, etc., also supports
> cross-compiling for various platforms, like iOS, Android, Emscripten,
> Windows Phone, etc.
> - open-source (MIT licensed) and freeware
> - supports both build from sources and pre-built artifacts (for instance,
> if compare to usage of submodules, you probably wouldn't use pre-built
> binaries in that case, as repo size will grow)
> - supports multiple versions of libraries (some package managers provide
> only "latest")
> - supports options of libraries (whatever you need is configurable per
> library, e.g. shared vs static, with or without assembler, multi-threaded
> vs single-threaded, etc)
> - supports installation of build tools as well (something like bison, flex,
> nasm, yasm, etc.)
> - has concept of profiles and their management (include options, tools and
> environment variables to be used for build)
> - supports various build systems (CMake, Meson, MSBuild, boost build,
> etc.), pretty much build system agnostic
> - decentralized, you may use in-house server (e.g. conan server or
> artifactory), or bintray
> - extensible via hooks
> there are some comparisons of package managers on reddit (e.g.
> https://www.reddit.com/r/cpp/comments/9m4l0p/conan_vcpkg_or_build2/ and
>
> https://www.reddit.com/r/cpp/comments/8ldhu0/opinions_on_the_conan_package_manager/
> ).
>
> > and eagerly asking for a vote? I believe that reasonable discussion would
> keep us within a *healthy* discussion.
> I am asking for a vote because I was told to do this on GitHub discussion
> by MXNet developers. I've already started discussions using all suggested
> channels (GitHub, JIRA and this mail list). all questions were answered,
> and no further questions appeared until today. I was thinking we already
> have passed the discussion stage, as all discussion have stalled with no
> objections (nobody clearly said something like "we're going to adopt conan"
> or "we will not use conan definetely"). so my impression was discussion
> stage was already passed, and now it's time for the decision. sorry if I
> had wrong impression, I am not really very familiar with your processes.
>
> > why not I simply specify versions in git submodules, which everyone
> understands how it behaves
> I haven't said usage of submodules itself is something bad, if you're fine
> with submodules, it's okay, but it seems like for MXNet they don't fit all
> use-cases, as some dependencies are downloaded via other ways (as mentioned
> above, via conda, cmake or just downloaded from GitHub archives in CI
> scripts). this causes fragmentation and 

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-03 Thread Konstantin Ivlev
Hi Junru,

> I am actually a bit concerned about the security issues. We are asked to
download binaries from third-party websites, which are not controlled or
validated by Apache
it's possible to run a conan server inside the Apache network and download
binaries and sources only from this remote
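for example (the server URL below is a placeholder, not a real host):

  conan_server                       # start the reference server implementation shipped with conan
  conan remote add apache-internal http://conan.internal.example:9300
  conan remote remove conan-center   # optionally drop the public remote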

> CMake does support downloading artifacts from a pre-defined website very well
same for conan, it also supports downloading artifacts from pre-defined
websites

> rather than broken links to nonsense
please let me know which links are broken. I've checked them all, and they
are all available for me. maybe your ISP is blocking something?

> most of the dependencies you mentioned adopts CMake, rather than Conan
I don't think it's relevant to compare CMake (a meta-build system) and Conan
(a package manager): they have different purposes, they might be used
simultaneously or on their own, and they aren't mutually exclusive.

> Would you mind at least mentioning the benefits somewhere in this thread
okay, let me list some benefits (not all of them, just the ones I have in
mind):
- cross-platform, runs on Windows, Linux, OSX, FreeBSD, etc., also supports
cross-compiling for various platforms, like iOS, Android, Emscripten,
Windows Phone, etc.
- open-source (MIT licensed) and freeware
- supports both building from sources and using pre-built artifacts (with
submodules, by contrast, you probably wouldn't check in pre-built binaries,
as the repo size would grow)
- supports multiple versions of libraries (some package managers provide
only "latest")
- supports options of libraries (whatever you need is configurable per
library, e.g. shared vs static, with or without assembler, multi-threaded
vs single-threaded, etc; see the sketch after this list)
- supports installation of build tools as well (something like bison, flex,
nasm, yasm, etc.)
- has concept of profiles and their management (include options, tools and
environment variables to be used for build)
- supports various build systems (CMake, Meson, MSBuild, boost build,
etc.), pretty much build system agnostic
- decentralized, you may use in-house server (e.g. conan server or
artifactory), or bintray
- extensible via hooks
there are some comparisons of package managers on reddit (e.g.
https://www.reddit.com/r/cpp/comments/9m4l0p/conan_vcpkg_or_build2/ and
https://www.reddit.com/r/cpp/comments/8ldhu0/opinions_on_the_conan_package_manager/
).
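
to make the "options" point above concrete, a minimal conanfile.txt could
look like this (package names and versions here are only an illustration,
not a proposal for MXNet's actual dependency set):

  [requires]
  openblas/0.3.5
  gtest/1.8.1

  [options]
  openblas:shared=False

  [generators]
  cmake

conan install then resolves these and writes a conanbuildinfo.cmake for the
CMake build to include.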

> and eagerly asking for a vote? I believe that reasonable discussion would
keep us within a *healthy* discussion.
I am asking for a vote because I was told to do this in the GitHub discussion
by MXNet developers. I've already started discussions using all suggested
channels (GitHub, JIRA and this mail list). All questions were answered,
and no further questions appeared until today. I was thinking we had already
passed the discussion stage, as all discussions had stalled with no
objections (nobody clearly said something like "we're going to adopt conan"
or "we will definitely not use conan"). So my impression was that the
discussion stage had already passed and it was time for a decision. Sorry if
I had the wrong impression; I am not really very familiar with your processes.

> why not I simply specify versions in git submodules, which everyone
understands how it behaves
I haven't said that usage of submodules is itself bad; if you're fine with
submodules, that's okay. But it seems that for MXNet they don't fit all
use-cases, as some dependencies are fetched in other ways (as mentioned
above: via conda, CMake, or plain GitHub archive downloads in CI scripts).
This causes fragmentation and adds complexity, so it's clear that submodules
somehow didn't satisfy all of MXNet's needs, as they aren't used for
everything.

> Everyone knows how to include a sub-directory in cmake in one line
probably, but not all of your dependencies use CMake to build, so you can't
simply include them into cmake in one line.

yours sincerely, Konstantin

Fri, May 3, 2019 at 23:10, Junru Shao :

> I am actually a bit concerned about the security issues. We are asked to
> download binaries from third-party websites, which are not controlled or
> validated by Apache. Although it is claimed to be “decentralized”, I am
> really not convinced where the security comes from.
>
> In the meantime, sacrificing security doesn’t really bring us tangible
> benefits. CMake does support downloading artifacts from a pre-defined website
> very well. We may also have pre-built binaries stored in our CI docker
> without having to download them from the internet.
>
> Another point is that I am not convinced of any advantage of Conan over
> other package managers for C++. Would you mind at least mentioning the
> benefits somewhere in this thread, rather than carelessly including tons of
> irrelevant links (some of which are even wrong) and eagerly asking for a
> vote? I believe that reasonable discussion would keep this a *healthy*
> thread.
>
> Last but not least, as 

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-03 Thread kellen sunderland
So firstly, let's try to keep our responses empathetic and avoid ad hominem
comments.  It might be beneficial to take some time to review the Apache
Code of Conduct [1].  Konstantin has taken a lot of time to think about
dependency management in MXNet on a volunteer basis which is commendable.

Second, Junru, I share many of your security concerns, but my understanding
is that Conan.io allows you to pull dependencies as binaries, or as source
using the -build option, so we're not limited to strictly pulling binaries
from remote servers.
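
For example (a sketch only; the exact package set depends on the conanfile
in the PR, which I haven't re-checked):

  conan install . --build=missing   # prefer prebuilt binaries, build whatever is absent
  conan install . --build           # distrust binaries, build everything from source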

Some benefits I see:
* A uniform method of pulling dependencies is much easier to maintain and
reason about.  Need to update a package because of a security
vulnerability?  Go to the single place we configure dependencies and update
it.
* We have many dependencies that do not need to be checked out depending on
the build options a user desires (so called build conditional
dependencies).  There's not much point in downloading / checking out these
dependencies if you're not going to use them.
* Subrepo sources have to have certified license reviews every release.
Using a package manager would solve this issue.
* We have an extra user base (Conan.io users) who get exposure to MXNet,
growing our user base.

Many of these benefits we'd get with other package management systems.  One
option I had previously proposed was Hunter, which is basically a wrapper
around CMake's ExternalProject functionality.  The tradeoff I see between
the two is that Hunter (or ExternalProject via CMake) would be lighter
weight but would have less support, a smaller community and would be hard
to use consistently across the project with the variety of collaborators it
has.
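
For reference, the raw CMake mechanism that both of them wrap looks roughly
like this (repository and tag are illustrative only):

  include(ExternalProject)
  ExternalProject_Add(openblas
    GIT_REPOSITORY https://github.com/xianyi/OpenBLAS.git
    GIT_TAG        v0.3.5
    CMAKE_ARGS     -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps)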

1: https://www.apache.org/foundation/policies/conduct.html
2: https://github.com/ruslo/hunter

On Fri, May 3, 2019 at 9:10 AM Junru Shao  wrote:

> I am actually a bit concerned about the security issues. We are asked to
> download binaries from third-party websites, which are not controlled or
> validated by Apache. Although it is claimed to be “decentralized”, I am
> really not convinced where the security comes from.
>
> In the meantime, sacrificing security doesn’t really bring us tangible
> benefits. CMake does support downloading artifacts from a pre-defined website
> very well. We may also have pre-built binaries stored in our CI docker
> without having to download them from the internet.
>
> Another point is that I am not convinced of any advantage of Conan over
> other package managers for C++. Would you mind at least mentioning the
> benefits somewhere in this thread, rather than carelessly including tons of
> irrelevant links (some of which are even wrong) and eagerly asking for a
> vote? I believe that reasonable discussion would keep this a *healthy*
> thread.
>
> Last but not least, as we all know, most of the dependencies you mentioned
> adopt CMake, rather than Conan, the meta-generator which generates CMake.
> I don't see how the logic “oh, you have tons of dependencies so you must
> use Conan” stands: why not simply pin versions in git submodules, whose
> behavior everyone understands? Everyone knows how to include a
> sub-directory in cmake in one line, so why do we have to write python to
> make it less understandable and more complicated?
>
> In conclusion, we need to be reasonable in healthy discussion. I don’t
> particularly want to rudely +1 or -1 for a thing that is unclear to me, but
> I really want to see pros and cons, discussion about issues and concerns,
> rather than broken links to nonsense.
>
> Looking forward to your reply!
>
> Thanks,
> Junru
>
> On Fri, May 3, 2019 at 08:05 kellen sunderland <
> kellen.sunderl...@gmail.com>
> wrote:
>
> > Hey Konstantin.  Thanks for starting an email thread and sorry for the
> > confusion.  I think the idea is that we should discuss and agree on
> > Conan.io adoption first on the dev list, then start merging PRs.  Release
> > 1.4.1 is already in testing and the 1.5 code freeze deadline is also near
> > so I think it could be difficult to make such a large change on one of
> > those releases.  I've looked into package management solutions for the
> > project before.  I was in favour of hunter, but I think Conan's adoption
> > rate now makes it the best option.  It's simple to use and is becoming
> > industry standard, with a minor downside of requiring Python (which has
> > meanwhile become the most popular dev language).  I'd personally be -1
> for
> > 1.4.1 or 1.5, +1 for using Conan in 1.6 or 2.0.
> >
> > -Kellen
> >
> > On Fri, May 3, 2019 at 12:59 AM Konstantin Ivlev 
> > wrote:
> >
> > > hi Sheng Zha,
> > >
> > > on pull request review I was told by Anirudh anirudhacharya and Roshani
> > > Nagmote to start discussion/vote on the mxnet dev list. it seems to be
> a
> > > vicious circle now - on GitHub I am told to use vote, and on vote I am
> > told
> > > to use GitHub, this doesn't help much.
> > > FYI GitHub review stuck, it's already opened 

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-03 Thread Junru Shao
I am actually a bit concerned about the security issues. We are asked to
download binaries from third-party websites, which are not controlled or
validated by Apache. Although it is claimed to be “decentralized”, I am
really not convinced where the security comes from.

In the meantime, sacrificing security doesn’t really bring us tangible
benefits. CMake does support downloading artifacts from a pre-defined website
very well. We may also have pre-built binaries stored in our CI docker
without having to download them from the internet.

Another point is that I am not convinced of any advantage of Conan over
other package managers for C++. Would you mind at least mentioning the
benefits somewhere in this thread, rather than carelessly including tons of
irrelevant links (some of which are even wrong) and eagerly asking for a
vote? I believe that reasonable discussion would keep this a *healthy*
thread.

Last but not least, as we all know, most of the dependencies you mentioned
adopt CMake, rather than Conan, the meta-generator which generates CMake. I
don't see how the logic “oh, you have tons of dependencies so you must use
Conan” stands: why not simply pin versions in git submodules, whose behavior
everyone understands? Everyone knows how to include a sub-directory in cmake
in one line, so why do we have to write python to make it less understandable
and more complicated?

In conclusion, we need to be reasonable in healthy discussion. I don’t
particularly want to rudely +1 or -1 for a thing that is unclear to me, but
I really want to see pros and cons, discussion about issues and concerns,
rather than broken links to nonsense.

Looking forward to your reply!

Thanks,
Junru

On Fri, May 3, 2019 at 08:05 kellen sunderland 
wrote:

> Hey Konstantin.  Thanks for starting an email thread and sorry for the
> confusion.  I think the idea is that we should discuss and agree on
> Conan.io adoption first on the dev list, then start merging PRs.  Release
> 1.4.1 is already in testing and the 1.5 code freeze deadline is also near
> so I think it could be difficult to make such a large change on one of
> those releases.  I've looked into package management solutions for the
> project before.  I was in favour of hunter, but I think Conan's adoption
> rate now makes it the best option.  It's simple to use and is becoming
> industry standard, with a minor downside of requiring Python (which has
> meanwhile become the most popular dev language).  I'd personally be -1 for
> 1.4.1 or 1.5, +1 for using Conan in 1.6 or 2.0.
>
> -Kellen
>
> On Fri, May 3, 2019 at 12:59 AM Konstantin Ivlev 
> wrote:
>
> > hi Sheng Zha,
> >
> > on pull request review I was told by Anirudh anirudhacharya and Roshani
> > Nagmote to start a discussion/vote on the mxnet dev list. It seems to be a
> > vicious circle now: on GitHub I am told to vote, and in the vote thread I
> > am told to use GitHub, which doesn't help much.
> > FYI the GitHub review is stuck: it has been open since November 2018 and is
> > still not approved (however, there were no objections during the review).
> > Previous discussion in e-mail thread also didn't encounter any
> objections,
> > and all questions were answered.
> > JIRA ticket has no discussion at all (except it has duplicates of
> comments
> > made on GitHub).
> > so let's proceed with a 3-day vote for now, as other communication channels
> > were already tried with no success.
> >
> > yours sincerely, Konstantin
> >
> > Fri, May 3, 2019 at 14:17, Sheng Zha :
> >
> > > Hi Konstantin,
> > >
> > > While conan looks like an option that's worth exploring, given that
> your
> > > request is to merge the pull request, I'd suggest that the request
> should
> > > go through the regular pull request review and it doesn't really need a
> > > vote (as it doesn't substitute reviews anyway)
> > >
> > > If you would like to gather more attention to it, feel free to ping in
> a
> > > discussion thread.
> > >
> > > -sz
> > >
> > > On 2019/05/03 06:29:55, Konstantin Ivlev  wrote:
> > > > Dear MXNet community,
> > > >
> > > > This is the 3-day vote to add conan support for Apache MXNet
> > (incubating)
> > > > version v1.4.1.
> > > > The voting on dev@ list will start May 03 23:59:59 (PST) and close
> on
> > > May
> > > > 06 23:59:59.
> > > >
> > > > Background: conan is an open-source, freeware, cross-platform package
> > > > manager for C and C++ projects, written in python. It provides
> > > > integration with various build systems, including CMake. conan may use
> > > > bintray as a server to store and download pre-built packages, or
> > > > packages may always be built from sources.
> > > >
> > > > Problem: currently (as of v1.4.1), Apache MXNet (incubating) is using
> > > > several ways to fetch 3rd-party dependencies simultaneously, for
> > > > instance:
> > > > 1. download GitHub archives during the build
> > > > - OpenBLAS
> > > > - OpenCV
> > > > 2. conda (alternative way to GitHub archives)
> > 

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-03 Thread kellen sunderland
Hey Konstantin.  Thanks for starting an email thread and sorry for the
confusion.  I think the idea is that we should discuss and agree on
Conan.io adoption first on the dev list, then start merging PRs.  Release
1.4.1 is already in testing and the 1.5 code freeze deadline is also near
so I think it could be difficult to make such a large change on one of
those releases.  I've looked into package management solutions for the
project before.  I was in favour of hunter, but I think Conan's adoption
rate now makes it the best option.  It's simple to use and is becoming
industry standard, with a minor downside of requiring Python (which has
meanwhile become the most popular dev language).  I'd personally be -1 for
1.4.1 or 1.5, +1 for using Conan in 1.6 or 2.0.

-Kellen

On Fri, May 3, 2019 at 12:59 AM Konstantin Ivlev 
wrote:

> hi Sheng Zha,
>
> on pull request review I was told by Anirudh anirudhacharya and Roshani
> Nagmote to start a discussion/vote on the mxnet dev list. It seems to be a
> vicious circle now: on GitHub I am told to vote, and in the vote thread I am
> told to use GitHub, which doesn't help much.
> FYI the GitHub review is stuck: it has been open since November 2018 and is
> still not approved (however, there were no objections during the review).
> Previous discussion in e-mail thread also didn't encounter any objections,
> and all questions were answered.
> JIRA ticket has no discussion at all (except it has duplicates of comments
> made on GitHub).
> so let's proceed with a 3-day vote for now, as other communication channels
> were already tried with no success.
>
> yours sincerely, Konstantin
>
> Fri, May 3, 2019 at 14:17, Sheng Zha :
>
> > Hi Konstantin,
> >
> > While conan looks like an option that's worth exploring, given that your
> > request is to merge the pull request, I'd suggest that the request should
> > go through the regular pull request review and it doesn't really need a
> > vote (as it doesn't substitute reviews anyway)
> >
> > If you would like to gather more attention to it, feel free to ping in a
> > discussion thread.
> >
> > -sz
> >
> > On 2019/05/03 06:29:55, Konstantin Ivlev  wrote:
> > > Dear MXNet community,
> > >
> > > This is the 3-day vote to add conan support for Apache MXNet
> (incubating)
> > > version v1.4.1.
> > > The voting on dev@ list will start May 03 23:59:59 (PST) and close on
> > May
> > > 06 23:59:59.
> > >
> > > Background: conan is an open-source, freeware, cross-platform package
> > > manager for C and C++ projects, written in python. It provides integration
> > > with various build systems, including CMake. conan may use bintray as a
> > > server to store and download pre-built packages, or packages may always be
> > > built from sources.
> > >
> > > Problem: currently (as of v1.4.1), Apache MXNet (incubating) is using
> > > several ways to fetch 3rd-party dependencies simultaneously, for
> > instance:
> > > 1. download GitHub archives during the build
> > > - OpenBLAS
> > > - OpenCV
> > > 2. conda (alternative way to GitHub archives)
> > > 3. download from CMake
> > > - Intel Math Kernel Library (MKL)
> > > 4. Git submodules
> > > - cub
> > > - dlpack
> > > - dmlc-core
> > > - googletest
> > > - mkldnn
> > > - mshadow
> > > - onnx-tensorrt
> > > - openmp
> > > - ps-lite
> > > - tvm
> > > therefore, there are multiple places to look for 3rd parties, and it's
> > > hard to update them, as you need to remember or figure out how to update
> > > a particular dependency to a newer version, for instance.
> > > current Apache MXNet (incubating) build instructions differ greatly per
> > > platform and require downloading and unzipping some archives manually and
> > > setting variables with paths to these archives, in addition to updating
> > > git submodules.
> > >
> > > Action: merge the pull request providing initial conan support for
> > > Apache MXNet (incubating). Support conan as an alternative approach to
> > > fetching various 3rd-party dependencies; the old approaches will still be
> > > available, supported and left intact.
> > >
> > > Below are links to
> > > 1) conan web-site:  https://conan.io/
> > > 2) conan GitHub repository: https://github.com/conan-io/conan
> > > 3) conan documentation: https://docs.conan.io/en/latest/
> > > 4) bintray: https://bintray.com
> > > 5) pull request adding conan support to Apache MXNet (incubating):
> > > https://github.com/apache/incubator-mxnet/pull/13400
> > > 6) JIRA issue: https://issues.apache.org/jira/browse/MXNET-1229
> > > 7) previous email discussion:
> > >
> >
> https://lists.apache.org/thread.html/301a46a637f7e3c249c475713f701bef7530c32bc92d8834c0882897@%3Cdev.mxnet.apache.org%3E
> > > 8) MXNet build instructions:
> > > https://mxnet-tqchen.readthedocs.io/en/latest/how_to/build.html
> > > 9) MXNet build instructions (Windows):
> > >
> >
> https://mxnet.incubator.apache.org/versions/master/install/windows_setup.html
> > > 10) MXNet build instructions (OSX):
> > >
> 

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread kellen sunderland
 1, 2019 at 7:06 PM Aaron Markham <
> > > > > >> aaron.s.mark...@gmail.com>
> > > > > >>>> wrote:
> > > > > >>>>
> > > > > >>>>> Make that +1 (non-binding)
> > > > > >>>>>
> > > > > >>>>> On Wed, May 1, 2019 at 3:42 PM Aaron Markham <
> > > > > >>> aaron.s.mark...@gmail.com>
> > > > > >>>>> wrote:
> > > > > >>>>>>
> > > > > >>>>>> +1 (binding)
> > > > > >>>>>>
> > > > > >>>>>> * Built with GPU and tested the first part of the ssd
> example.
> > > > > >>>>>> * Built with GPU / cross-compiled to arm8 for Jetson.
> > > > > >>>>>> * Built Scala/Java on top of the cross-compiled arm8 (ran
> into
> > > > > >>> trouble
> > > > > >>>>>> here, but I think this is not popular enough yet to derail
> > > things,
> > > > > >>>>>> plus there are workarounds)
> > > > > >>>>>> * Built on CPU instance and tested docs.
> > > > > >>>>>> http://34.201.8.176/versions/1.4.1/api/python/io/io.html
> > > > > >>>>>> I don't see anything specific being different in this patch
> > for
> > > > > >> docs,
> > > > > >>>>>> so hard to tell if there's an issue. I'll assume not given
> the
> > > > > >>>>>> successful generation of the API docs.
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>> On Wed, May 1, 2019 at 1:28 PM Pedro Larroy
> > > > > >>>>>>  wrote:
> > > > > >>>>>>>
> > > > > >>>>>>> +1 (non-binding)
> > > > > >>>>>>>
> > > > > >>>>>>> Tried CPU build + C++ tests + 714 Python unit tests in
> 605s.
> > > > > >>>>>>> ARMv7 build + small unit test in QEMU + ARMv8 builds.
> > > > > >>>>>>>
> > > > > >>>>>>> Thanks. Regards
> > > > > >>>>>>>
> > > > > >>>>>>> Pedro.
> > > > > >>>>>>>
> > > > > >>>>>>> On Wed, May 1, 2019 at 10:41 AM Qing Lan <
> > lanking...@live.com>
> > > > > >>>> wrote:
> > > > > >>>>>>>>
> > > > > >>>>>>>> +1 (binding)
> > > > > >>>>>>>>
> > > > > >>>>>>>> build from source works for OSX and Ubuntu CPU
> > > > > >>>>>>>> Scala build/test successfully with Dynamic link and static
> > > > > >> link.
> > > > > >>>>>>>>
> > > > > >>>>>>>> Thanks,
> > > > > >>>>>>>> Qing
> > > > > >>>>>>>>
> > > > > >>>>>>>> 
> > > > > >>>>>>>> From: Sheng Zha 
> > > > > >>>>>>>> Sent: Wednesday, May 1, 2019 13:14
> > > > > >>>>>>>> To: d...@mxnet.apache.org
> > > > > >>>>>>>> Subject: Re: [VOTE] Release Apache MXNet (incubating)
> > version
> > > > > >>>>> 1.4.1.rc0
> > > > > >>>>>>>>
> > > > > >>>>>>>> Hi all,
> > > > > >>>>>>>>
> > > > > >>>>>>>> Reminder that the vote for 1.4.1 release is still ongoing.
> > If
> > > > > >> you
> > > > > >>>>> can, please help out. Thank you.
> > > > > >>>>>>>>
> > > > > >>>>>>>> -sz
> > > > > >>>>>>>>
> > > > > >>>>>>>> On 2019/04/30 06:51:45, Junru Shao <
> junrushao1...@gmail.com
> > >
> > > > > >>>> wrote:
> > > > > >>>>>>>>> Dear MXNet community,
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> This is the 3-day vote to release Apache MXNet
> (incubating)
> > > > > >>>>> version v1.4.1.
> > > > > >>>>>>>>> The voting on dev@ list will start Apr 29 23:59:59 (PST)
> > and
> > > > > >>>>> close on May
> > > > > >>>>>>>>> 02 23:59:59.
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> Below are links to
> > > > > >>>>>>>>> 1) Release notes:
> > > > > >>>>>>>>>
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.1+Release+Notes
> > > > > >>>>>>>>> .
> > > > > >>>>>>>>> 2) Release Candidate:
> > > > > >>>>>>>>>
> > > > > >>>
> https://github.com/apache/incubator-mxnet/releases/tag/1.4.1.rc0
> > > > > >>>> .
> > > > > >>>>>>>>> 3) Source and signatures on Apache dist server:
> > > > > >>>>>>>>>
> > > > > >>>>
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/
> > > .
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> Please remember to TEST first before voting accordingly:
> > > > > >>>>>>>>> +1 = approve
> > > > > >>>>>>>>> +0 = no opinion
> > > > > >>>>>>>>> -1 = disapprove (provide reason)
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> Best regards,
> > > > > >>>>>>>>> Junru Shao
> > > > > >>>>>>>>>


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread Damien Stanton
>>>>>> * Built Scala/Java on top of the cross-compiled arm8 (ran into
> > > > >>> trouble
> > > > >>>>>> here, but I think this is not popular enough yet to derail
> > things,
> > > > >>>>>> plus there are workarounds)
> > > > >>>>>> * Built on CPU instance and tested docs.
> > > > >>>>>> http://34.201.8.176/versions/1.4.1/api/python/io/io.html
> > > > >>>>>> I don't see anything specific being different in this patch
> for
> > > > >> docs,
> > > > >>>>>> so hard to tell if there's an issue. I'll assume not given the
> > > > >>>>>> successful generation of the API docs.
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>> On Wed, May 1, 2019 at 1:28 PM Pedro Larroy
> > > > >>>>>>  wrote:
> > > > >>>>>>>
> > > > >>>>>>> +1 (non-binding)
> > > > >>>>>>>
> > > > >>>>>>> Tried CPU build + C++ tests + 714 Python unit tests in 605s.
> > > > >>>>>>> ARMv7 build + small unit test in QEMU + ARMv8 builds.
> > > > >>>>>>>
> > > > >>>>>>> Thanks. Regards
> > > > >>>>>>>
> > > > >>>>>>> Pedro.
> > > > >>>>>>>
> > > > >>>>>>> On Wed, May 1, 2019 at 10:41 AM Qing Lan <
> lanking...@live.com>
> > > > >>>> wrote:
> > > > >>>>>>>>
> > > > >>>>>>>> +1 (binding)
> > > > >>>>>>>>
> > > > >>>>>>>> build from source works for OSX and Ubuntu CPU
> > > > >>>>>>>> Scala build/test successfully with Dynamic link and static
> > > > >> link.
> > > > >>>>>>>>
> > > > >>>>>>>> Thanks,
> > > > >>>>>>>> Qing
> > > > >>>>>>>>
> > > > >>>>>>>> 
> > > > >>>>>>>> From: Sheng Zha 
> > > > >>>>>>>> Sent: Wednesday, May 1, 2019 13:14
> > > > >>>>>>>> To: d...@mxnet.apache.org
> > > > >>>>>>>> Subject: Re: [VOTE] Release Apache MXNet (incubating)
> version
> > > > >>>>> 1.4.1.rc0
> > > > >>>>>>>>
> > > > >>>>>>>> Hi all,
> > > > >>>>>>>>
> > > > >>>>>>>> Reminder that the vote for 1.4.1 release is still ongoing.
> If
> > > > >> you
> > > > >>>>> can, please help out. Thank you.
> > > > >>>>>>>>
> > > > >>>>>>>> -sz
> > > > >>>>>>>>
> > > > >>>>>>>> On 2019/04/30 06:51:45, Junru Shao  >
> > > > >>>> wrote:
> > > > >>>>>>>>> Dear MXNet community,
> > > > >>>>>>>>>
> > > > >>>>>>>>> This is the 3-day vote to release Apache MXNet (incubating)
> > > > >>>>> version v1.4.1.
> > > > >>>>>>>>> The voting on dev@ list will start Apr 29 23:59:59 (PST)
> and
> > > > >>>>> close on May
> > > > >>>>>>>>> 02 23:59:59.
> > > > >>>>>>>>>
> > > > >>>>>>>>> Below are links to
> > > > >>>>>>>>> 1) Release notes:
> > > > >>>>>>>>>
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.1+Release+Notes
> > > > >>>>>>>>> .
> > > > >>>>>>>>> 2) Release Candidate:
> > > > >>>>>>>>>
> > > > >>> https://github.com/apache/incubator-mxnet/releases/tag/1.4.1.rc0
> > > > >>>> .
> > > > >>>>>>>>> 3) Source and signatures on Apache dist server:
> > > > >>>>>>>>>
> > > > >>>>
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/
> > .
> > > > >>>>>>>>>
> > > > >>>>>>>>> Please remember to TEST first before voting accordingly:
> > > > >>>>>>>>> +1 = approve
> > > > >>>>>>>>> +0 = no opinion
> > > > >>>>>>>>> -1 = disapprove (provide reason)
> > > > >>>>>>>>>
> > > > >>>>>>>>> Best regards,
> > > > >>>>>>>>> Junru Shao
> > > > >>>>>>>>>


Re: DNS failures in jenkins

2019-05-03 Thread Max G. Faraday
Hey,

This sounds likely. Yes, we’ll take a look, if they haven’t already. 
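
If it is the resolver, the usual fix on the workers is just extra nameserver
lines in /etc/resolv.conf, something like (addresses below are placeholders,
not our actual resolvers):

  nameserver 10.0.0.2          # primary, e.g. the VPC-provided resolver
  nameserver 1.1.1.1           # public fallback
  options timeout:2 attempts:3

glibc tries them in order, so a flaky primary stops being fatal.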


"I'm trying real hard to be the shepherd." -Jules Winnfield


> On Apr 25, 2019, at 8:00 PM, Pedro Larroy  
> wrote:
> 
> Hi
> 
> I see some DNS resolution failures on jenkins; I think this is why
> jenkins sometimes does not report the build status. What DNS server are
> we using on the master? Should we add a couple of secondary resolvers to
> remediate?
> 
> http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fwebsite/detail/PR-14788/2/pipeline/
> 
> 
> Thanks.
> 
> Pedro.


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread chohyu01
+1 (non-binding)

Built MXNet from source for CPU target.
I was able to run the quantization example with MKLDNN at 
https://github.com/apache/incubator-mxnet/tree/1.4.1.rc0/example/quantization

Philip.

On 2019/04/30 06:51:45, Junru Shao  wrote: 
> Dear MXNet community,
> 
> This is the 3-day vote to release Apache MXNet (incubating) version v1.4.1.
> The voting on dev@ list will start Apr 29 23:59:59 (PST) and close on May
> 02 23:59:59.
> 
> Below are links to
> 1) Release notes:
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.1+Release+Notes
> .
> 2) Release Candidate:
> https://github.com/apache/incubator-mxnet/releases/tag/1.4.1.rc0.
> 3) Source and signatures on Apache dist server:
> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/.
> 
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
> 
> Best regards,
> Junru Shao
> 


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread kellen sunderland
 Aaron Markham <
> >>>>>>> aaron.s.mark...@gmail.com>
> >>>>>>>>> wrote:
> >>>>>>>>>>
> >>>>>>>>>> +1 (binding)
> >>>>>>>>>>
> >>>>>>>>>> * Built with GPU and tested the first part of the ssd example.
> >>>>>>>>>> * Built with GPU / cross-compiled to arm8 for Jetson.
> >>>>>>>>>> * Built Scala/Java on top of the cross-compiled arm8 (ran into
> >>>>>>> trouble
> >>>>>>>>>> here, but I think this is not popular enough yet to derail
> >> things,
> >>>>>>>>>> plus there are workarounds)
> >>>>>>>>>> * Built on CPU instance and tested docs.
> >>>>>>>>>> http://34.201.8.176/versions/1.4.1/api/python/io/io.html
> >>>>>>>>>> I don't see anything specific being different in this patch for
> >>>>>> docs,
> >>>>>>>>>> so hard to tell if there's an issue. I'll assume not given the
> >>>>>>>>>> successful generation of the API docs.
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On Wed, May 1, 2019 at 1:28 PM Pedro Larroy
> >>>>>>>>>>  wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> +1 (non-binding)
> >>>>>>>>>>>
> >>>>>>>>>>> Tried CPU build + C++ tests + 714 Python unit tests in 605s.
> >>>>>>>>>>> ARMv7 build + small unit test in QEMU + ARMv8 builds.
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks. Regards
> >>>>>>>>>>>
> >>>>>>>>>>> Pedro.
> >>>>>>>>>>>
> >>>>>>>>>>> On Wed, May 1, 2019 at 10:41 AM Qing Lan 
> >>>>>>>> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> +1 (binding)
> >>>>>>>>>>>>
> >>>>>>>>>>>> build from source works for OSX and Ubuntu CPU
> >>>>>>>>>>>> Scala build/test successfully with Dynamic link and static
> >>>>>> link.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Thanks,
> >>>>>>>>>>>> Qing
> >>>>>>>>>>>>
> >>>>>>>>>>>> 
> >>>>>>>>>>>> From: Sheng Zha 
> >>>>>>>>>>>> Sent: Wednesday, May 1, 2019 13:14
> >>>>>>>>>>>> To: d...@mxnet.apache.org
> >>>>>>>>>>>> Subject: Re: [VOTE] Release Apache MXNet (incubating) version
> >>>>>>>>> 1.4.1.rc0
> >>>>>>>>>>>>
> >>>>>>>>>>>> Hi all,
> >>>>>>>>>>>>
> >>>>>>>>>>>> Reminder that the vote for 1.4.1 release is still ongoing. If
> >>>>>> you
> >>>>>>>>> can, please help out. Thank you.
> >>>>>>>>>>>>
> >>>>>>>>>>>> -sz
> >>>>>>>>>>>>
> >>>>>>>>>>>> On 2019/04/30 06:51:45, Junru Shao 
> >>>>>>>> wrote:
> >>>>>>>>>>>>> Dear MXNet community,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> This is the 3-day vote to release Apache MXNet (incubating)
> >>>>>>>>> version v1.4.1.
> >>>>>>>>>>>>> The voting on dev@ list will start Apr 29 23:59:59 (PST) and
> >>>>>>>>> close on May
> >>>>>>>>>>>>> 02 23:59:59.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Below are links to
> >>>>>>>>>>>>> 1) Release notes:
> >>>>>>>>>>>>>
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.1+Release+Notes
> >>>>>>>>>>>>> .
> >>>>>>>>>>>>> 2) Release Candidate:
> >>>>>>>>>>>>>
> >>>>>>> https://github.com/apache/incubator-mxnet/releases/tag/1.4.1.rc0
> >>>>>>>> .
> >>>>>>>>>>>>> 3) Source and signatures on Apache dist server:
> >>>>>>>>>>>>>
> >>>>>>>> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/
> >> .
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Please remember to TEST first before voting accordingly:
> >>>>>>>>>>>>> +1 = approve
> >>>>>>>>>>>>> +0 = no opinion
> >>>>>>>>>>>>> -1 = disapprove (provide reason)
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Best regards,
> >>>>>>>>>>>>> Junru Shao
> >>>>>>>>>>>>>


Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-03 Thread Konstantin Ivlev
hi Sheng Zha,

on pull request review I was told by Anirudh anirudhacharya and Roshani
Nagmote to start a discussion/vote on the mxnet dev list. It seems to be a
vicious circle now: on GitHub I am told to vote, and in the vote thread I am
told to use GitHub, which doesn't help much.
FYI the GitHub review is stuck: it has been open since November 2018 and is
still not approved (however, there were no objections during the review).
The previous discussion in an e-mail thread also didn't encounter any
objections, and all questions were answered.
The JIRA ticket has no discussion at all (except duplicates of comments
made on GitHub).
So let's proceed with a 3-day vote for now, as the other communication
channels were already tried with no success.

yours sincerely, Konstantin

Fri, May 3, 2019 at 14:17, Sheng Zha :

> Hi Konstantin,
>
> While conan looks like an option that's worth exploring, given that your
> request is to merge the pull request, I'd suggest that the request should
> go through the regular pull request review and it doesn't really need a
> vote (as it doesn't substitute reviews anyway)
>
> If you would like to gather more attention to it, feel free to ping in a
> discussion thread.
>
> -sz
>
> On 2019/05/03 06:29:55, Konstantin Ivlev  wrote:
> > Dear MXNet community,
> >
> > This is the 3-day vote to add conan support for Apache MXNet (incubating)
> > version v1.4.1.
> > The voting on dev@ list will start May 03 23:59:59 (PST) and close on
> May
> > 06 23:59:59.
> >
> > Background: conan is an open-source, freeware, cross-platform package
> > manager for C and C++ projects, written in python. It provides integration
> > with various build systems, including CMake. conan may use bintray as a
> > server to store and download pre-built packages, or packages may always be
> > built from sources.
> >
> > Problem: currently (as of v1.4.1), Apache MXNet (incubating) is using
> > several ways to fetch 3rd-party dependencies simultaneously, for
> > instance:
> > 1. download GitHub archives during the build
> > - OpenBLAS
> > - OpenCV
> > 2. conda (alternative way to GitHub archives)
> > 3. download from CMake
> > - Intel Math Kernel Library (MKL)
> > 4. Git submodules
> > - cub
> > - dlpack
> > - dmlc-core
> > - googletest
> > - mkldnn
> > - mshadow
> > - onnx-tensorrt
> > - openmp
> > - ps-lite
> > - tvm
> > therefore, there are multiple places to look for 3rd parties, and it's
> > hard to update them, as you need to remember or figure out how to update a
> > particular dependency to a newer version, for instance.
> > current Apache MXNet (incubating) build instructions differ greatly per
> > platform and require downloading and unzipping some archives manually and
> > setting variables with paths to these archives, in addition to updating
> > git submodules.
> >
> > Action: merge the pull request providing initial conan support for Apache
> > MXNet (incubating). Support conan as an alternative approach to fetching
> > various 3rd-party dependencies; the old approaches will still be
> > available, supported and left intact.
> >
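Under the proposal, dependencies like those listed above would be declared in
one place instead. A hedged sketch of what such a conanfile.txt could look
like (package names and versions are illustrative, not the PR's actual
recipe list):

  cat > conanfile.txt <<'EOF'
  [requires]
  openblas/0.3.5@conan/stable
  opencv/3.4.5@conan/stable

  [generators]
  cmake
  EOF
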
> > Below are links to:
> > 1) conan web-site: https://conan.io/
> > 2) conan GitHub repository: https://github.com/conan-io/conan
> > 3) conan documentation: https://docs.conan.io/en/latest/
> > 4) bintray: https://bintray.com
> > 5) pull request adding conan support to Apache MXNet (incubating):
> > https://github.com/apache/incubator-mxnet/pull/13400
> > 6) JIRA issue: https://issues.apache.org/jira/browse/MXNET-1229
> > 7) previous email discussion:
> > https://lists.apache.org/thread.html/301a46a637f7e3c249c475713f701bef7530c32bc92d8834c0882897@%3Cdev.mxnet.apache.org%3E
> > 8) MXNet build instructions:
> > https://mxnet-tqchen.readthedocs.io/en/latest/how_to/build.html
> > 9) MXNet build instructions (Windows):
> > https://mxnet.incubator.apache.org/versions/master/install/windows_setup.html
> > 10) MXNet build instructions (OSX):
> > http://mxnet.incubator.apache.org/versions/master/install/osx_setup.html
> > 11) MXNet build instructions (Linux):
> > http://mxnet.incubator.apache.org/versions/master/install/ubuntu_setup.html
> > 12) MXNet development setup (OSX):
> > https://cwiki.apache.org/confluence/display/MXNET/MXNet+Developer+Setup+on+Mac
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> > Best regards,
> > Konstantin Ivlev
> >
>
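
Testing this particular proposal before voting means building with the PR
applied. A minimal sketch using GitHub's pull-request refs (the PR number is
from link 5 above; the local branch name is arbitrary):

  git clone --recursive https://github.com/apache/incubator-mxnet
  cd incubator-mxnet
  git fetch origin pull/13400/head:conan-support
  git checkout conan-support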


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread Sheng Zha
Hi Kellen,

Of course, feel free to count my vote in if that's OK. Since I helped prepare
the artifacts, I wasn't sure whether it was appropriate for me to vote, so I
refrained from voting until now.

+1

-sz

> On May 3, 2019, at 12:19 AM, kellen sunderland  
> wrote:
> 
> Hi Junru, could you give a quick summary of the binding / non-binding votes?
> 
> Damien, just to confirm: are you a member of the PPMC for MXNet?
> Usually committers and community members (like most of us) are encouraged to
> test and vote, but their votes technically count as non-binding for releases.
> 
> Sheng, can we assume you're +1 on the release?
> 
>> On Fri, May 3, 2019 at 12:09 AM Junru Shao  wrote:
>> 
>> Hi folks,
>> 
>> So far we have collected enough binding votes. Thank you guys for the hard
>> work testing the release!
>> 
>> The vote on dev@ is closed on May 02 23:59:59 (PST). Next, we are going to
>> vote for the Apache MXNet (incubating) release 1.4.1 on general@ tomorrow,
>> which starts on May 3 2019, 23:59:59 PST, and ends on May 07 2019, 23:59:59
>> PST.
>> 
>> Best,
>> Junru
>> 
>>> On Thu, May 2, 2019 at 11:29 PM Aston Zhang  wrote:
>>> 
>>> +1 (non-binding)
>>> 
>>> Passed all the code at zh.d2l.ai
>>> 
>>> On Thu, May 2, 2019 at 1:46 PM Joshua Z. Zhang 
>>> wrote:
>>> 
>>>> +1 (non-binding)
>>>> 
>>>> Built from source with cuda/cudnn.
>>>> 
>>>> - All tests passed
>>>> - GluonCV unittest scripts passed
>>>> - GluonCV training scripts passed
>>>> - No issue with python multiprocessing
>>>> 
>>>> Best,
>>>> Zhi
>>>>> On May 2, 2019, at 11:34 AM, kellen sunderland <
>>>> kellen.sunderl...@gmail.com> wrote:
>>>>> 
>>>>> +1 (non-binding)
>>>>> 
>>>>> I checked TRT integration builds and tests pass.
>>>>> MD5s
>>>>> Sigs look good.
>>>>> 
>>>>> -Kellen
>>>>> 
>>>>> On Thu, May 2, 2019 at 10:51 AM Damien Stanton <
>>> damien.stan...@gmail.com
>>>>> 
>>>>> wrote:
>>>>> 
>>>>>> +1 (binding)
>>>>>> 
>>>>>> Built from source / Scala / Clojure. All tests pass. The only issue
>>>>>> of minor note: the macOS build guide indicates the directive `brew
>>>>>> install opencv`; however, this installs OpenCV 4, which is currently
>>>>>> incompatible with mxnet and causes a failed build. The guide should
>>>>>> specify `brew install opencv@3` until/if version 4 is supported.
>>>>>> 
>>>>>> Best,
>>>>>> Damien
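
Damien's version pin, in shell form. Note that versioned Homebrew formulae
such as opencv@3 are keg-only, so linking may also be needed; that is general
Homebrew behaviour, not something stated in the build guide:

  brew install opencv@3
  brew link --force opencv@3   # keg-only formulae are not linked by default
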
>>>>>> 
>>>>>> On Thu, May 2, 2019 at 12:53 PM Lai Wei 
>> wrote:
>>>>>> 
>>>>>>> +1
>>>>>>> 
>>>>>>> Built from source and tested keras-mxnet working fine.
>>>>>>> 
>>>>>>> Best Regards
>>>>>>> 
>>>>>>> Lai
>>>>>>> 
>>>>>>> 
>>>>>>> On Wed, May 1, 2019 at 4:22 PM Carin Meier 
>>>> wrote:
>>>>>>> 
>>>>>>>> + 1 (binding)
>>>>>>>> 
>>>>>>>> Built Scala/ Clojure and ran tests
>>>>>>>> 
>>>>>>>> On Wed, May 1, 2019 at 7:06 PM Aaron Markham <
>>>>>> aaron.s.mark...@gmail.com>
>>>>>>>> wrote:
>>>>>>>> 
>>>>>>>>> Make that +1 (non-binding)
>>>>>>>>> 
>>>>>>>>> On Wed, May 1, 2019 at 3:42 PM Aaron Markham <
>>>>>>> aaron.s.mark...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>> 
>>>>>>>>>> +1 (binding)
>>>>>>>>>> 
>>>>>>>>>> * Built with GPU and tested the first part of the ssd example.
>>>>>>>>>> * Built with GPU / cross-compiled to arm8 for Jetson.
>>>>>>>>>> * Built Scala/Java on top of the cross-compiled arm8 (ran into
>>>>>>> trouble
>>>>>>>>>> here, but I think this is not popular enough yet to derail
>> things,
>>>

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread kellen sunderland
Hi Junru, could you give a quick summary of the binding / non-binding votes?

Damien, just to confirm: are you a member of the PPMC for MXNet?
Usually committers and community members (like most of us) are encouraged to
test and vote, but their votes technically count as non-binding for releases.

Sheng, can we assume you're +1 on the release?

On Fri, May 3, 2019 at 12:09 AM Junru Shao  wrote:

> Hi folks,
>
> So far we have collected enough binding votes. Thank you guys for the hard
> work testing the release!
>
> The vote on dev@ is closed on May 02 23:59:59 (PST). Next, we are going to
> vote for the Apache MXNet (incubating) release 1.4.1 on general@ tomorrow,
> which starts on May 3 2019, 23:59:59 PST, and ends on May 07 2019, 23:59:59
> PST.
>
> Best,
> Junru
>
> On Thu, May 2, 2019 at 11:29 PM Aston Zhang  wrote:
>
> > +1 (non-binding)
> >
> > Passed all the code at zh.d2l.ai
> >
> > On Thu, May 2, 2019 at 1:46 PM Joshua Z. Zhang 
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > Built from source with cuda/cudnn.
> > >
> > > - All tests passed
> > > - GluonCV unittest scripts passed
> > > - GluonCV training scripts passed
> > > - No issue with python multiprocessing
> > >
> > > Best,
> > > Zhi
> > > > On May 2, 2019, at 11:34 AM, kellen sunderland <
> > > kellen.sunderl...@gmail.com> wrote:
> > > >
> > > > +1 (non-binding)
> > > >
> > > > I checked TRT integration builds and tests pass.
> > > > MD5s
> > > > Sigs look good.
> > > >
> > > > -Kellen
> > > >
> > > > On Thu, May 2, 2019 at 10:51 AM Damien Stanton <
> > damien.stan...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> +1 (binding)
> > > >>
> > > >> Built from source / Scala / Clojure. All tests pass. The only issue
> > > >> of minor note: the macOS build guide indicates the directive `brew
> > > >> install opencv`; however, this installs OpenCV 4, which is currently
> > > >> incompatible with mxnet and causes a failed build. The guide should
> > > >> specify `brew install opencv@3` until/if version 4 is supported.
> > > >>
> > > >> Best,
> > > >> Damien
> > > >>
> > > >> On Thu, May 2, 2019 at 12:53 PM Lai Wei 
> wrote:
> > > >>
> > > >>> +1
> > > >>>
> > > >>> Built from source and tested keras-mxnet working fine.
> > > >>>
> > > >>> Best Regards
> > > >>>
> > > >>> Lai
> > > >>>
> > > >>>
> > > >>> On Wed, May 1, 2019 at 4:22 PM Carin Meier 
> > > wrote:
> > > >>>
> > > >>>> + 1 (binding)
> > > >>>>
> > > >>>> Built Scala/ Clojure and ran tests
> > > >>>>
> > > >>>> On Wed, May 1, 2019 at 7:06 PM Aaron Markham <
> > > >> aaron.s.mark...@gmail.com>
> > > >>>> wrote:
> > > >>>>
> > > >>>>> Make that +1 (non-binding)
> > > >>>>>
> > > >>>>> On Wed, May 1, 2019 at 3:42 PM Aaron Markham <
> > > >>> aaron.s.mark...@gmail.com>
> > > >>>>> wrote:
> > > >>>>>>
> > > >>>>>> +1 (binding)
> > > >>>>>>
> > > >>>>>> * Built with GPU and tested the first part of the ssd example.
> > > >>>>>> * Built with GPU / cross-compiled to arm8 for Jetson.
> > > >>>>>> * Built Scala/Java on top of the cross-compiled arm8 (ran into
> > > >>> trouble
> > > >>>>>> here, but I think this is not popular enough yet to derail
> things,
> > > >>>>>> plus there are workarounds)
> > > >>>>>> * Built on CPU instance and tested docs.
> > > >>>>>> http://34.201.8.176/versions/1.4.1/api/python/io/io.html
> > > >>>>>> I don't see anything specific being different in this patch for
> > > >> docs,
> > > >>>>>> so hard to tell if there's an issue. I'll assume not given the
> > > >>>>>> successful generation of the API docs.

Re: [VOTE] add conan support for Apache MXNet (incubating)

2019-05-03 Thread Sheng Zha
Hi Konstantin,

While conan looks like an option that's worth exploring, given that your
request is to merge the pull request, I'd suggest that the request go through
the regular pull request review; it doesn't really need a vote (as a vote
doesn't substitute for review anyway).

If you would like to draw more attention to it, feel free to ping in a
discussion thread.

-sz

On 2019/05/03 06:29:55, Konstantin Ivlev  wrote: 
> Dear MXNet community,
> 
> This is the 3-day vote to add conan support for Apache MXNet (incubating)
> version v1.4.1.
> The voting on dev@ list will start May 03 23:59:59 (PST) and close on May
> 06 23:59:59.
> 
> Background: conan is an open-source, free, cross-platform package manager
> for C and C++ projects, written in Python. It provides integration with
> various build systems, including CMake. conan may use Bintray as a server to
> store and download pre-built packages, or packages may always be built
> from source.
> 
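The Bintray mechanics mentioned above correspond to conan 1.x remotes; a
small sketch, where the remote name and URL are placeholders:

  conan remote add <name> https://api.bintray.com/conan/<org>/<repo>
  conan search 'opencv*' -r=conan-center   # conan-center is the default public remote
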
> Problem: currently (as of v1.4.1), Apache MXNet (incubating) uses several
> ways to fetch 3rd-party dependencies simultaneously, for instance:
> 1. download GitHub archives during the build
> - OpenBLAS
> - OpenCV
> 2. conda (an alternative to the GitHub archives)
> 3. download from CMake
> - Intel Math Kernel Library (MKL)
> 4. Git submodules
> - cub
> - dlpack
> - dmlc-core
> - googletest
> - mkldnn
> - mshadow
> - onnx-tensorrt
> - openmp
> - ps-lite
> - tvm
> Therefore, there are multiple places to look for 3rd parties, and it's hard
> to update them, as you need to remember or figure out how to update a
> particular dependency to a newer version, for instance.
> The current Apache MXNet (incubating) build instructions also differ
> significantly per platform: they require downloading and unzipping some
> archives manually and specifying variables with paths to these archives, in
> conjunction with updating git submodules.
> 
> Action: merge the pull request providing initial conan support for Apache
> MXNet (incubating); support conan as an alternative approach to fetching the
> various 3rd-party dependencies. The old approaches will still be available,
> supported, and left intact.
> 
> Below are links to:
> 1) conan web-site: https://conan.io/
> 2) conan GitHub repository: https://github.com/conan-io/conan
> 3) conan documentation: https://docs.conan.io/en/latest/
> 4) bintray: https://bintray.com
> 5) pull request adding conan support to Apache MXNet (incubating):
> https://github.com/apache/incubator-mxnet/pull/13400
> 6) JIRA issue: https://issues.apache.org/jira/browse/MXNET-1229
> 7) previous email discussion:
> https://lists.apache.org/thread.html/301a46a637f7e3c249c475713f701bef7530c32bc92d8834c0882897@%3Cdev.mxnet.apache.org%3E
> 8) MXNet build instructions:
> https://mxnet-tqchen.readthedocs.io/en/latest/how_to/build.html
> 9) MXNet build instructions (Windows):
> https://mxnet.incubator.apache.org/versions/master/install/windows_setup.html
> 10) MXNet build instructions (OSX):
> http://mxnet.incubator.apache.org/versions/master/install/osx_setup.html
> 11) MXNet build instructions (Linux):
> http://mxnet.incubator.apache.org/versions/master/install/ubuntu_setup.html
> 12) MXNet development setup (OSX):
> https://cwiki.apache.org/confluence/display/MXNET/MXNet+Developer+Setup+on+Mac
> 
> Please remember to TEST first before voting accordingly:
> +1 = approve
> +0 = no opinion
> -1 = disapprove (provide reason)
> 
> Best regards,
> Konstantin Ivlev
> 


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread Junru Shao
Hi folks,

So far we have collected enough binding votes. Thank you guys for the hard
work testing the release!

The vote on dev@ is closed on May 02 23:59:59 (PST). Next, we are going to
vote for the Apache MXNet (incubating) release 1.4.1 on general@ tomorrow,
which starts on May 3 2019, 23:59:59 PST, and ends on May 07 2019, 23:59:59
PST.

Best,
Junru

On Thu, May 2, 2019 at 11:29 PM Aston Zhang  wrote:

> +1 (non-binding)
>
> Passed all the code at zh.d2l.ai
>
> On Thu, May 2, 2019 at 1:46 PM Joshua Z. Zhang 
> wrote:
>
> > +1 (non-binding)
> >
> > Built from source with cuda/cudnn.
> >
> > - All tests passed
> > - GluonCV unittest scripts passed
> > - GluonCV training scripts passed
> > - No issue with python multiprocessing
> >
> > Best,
> > Zhi
> > > On May 2, 2019, at 11:34 AM, kellen sunderland <
> > kellen.sunderl...@gmail.com> wrote:
> > >
> > > +1 (non-binding)
> > >
> > > I checked TRT integration builds and tests pass.
> > > MD5s
> > > Sigs look good.
> > >
> > > -Kellen
> > >
> > > On Thu, May 2, 2019 at 10:51 AM Damien Stanton <
> damien.stan...@gmail.com
> > >
> > > wrote:
> > >
> > >> +1 (binding)
> > >>
> > >> Built from source / Scala / Clojure. All tests pass. The only issue
> > >> of minor note: the macOS build guide indicates the directive `brew
> > >> install opencv`; however, this installs OpenCV 4, which is currently
> > >> incompatible with mxnet and causes a failed build. The guide should
> > >> specify `brew install opencv@3` until/if version 4 is supported.
> > >>
> > >> Best,
> > >> Damien
> > >>
> > >> On Thu, May 2, 2019 at 12:53 PM Lai Wei  wrote:
> > >>
> > >>> +1
> > >>>
> > >>> Built from source and tested keras-mxnet working fine.
> > >>>
> > >>> Best Regards
> > >>>
> > >>> Lai
> > >>>
> > >>>
> > >>> On Wed, May 1, 2019 at 4:22 PM Carin Meier 
> > wrote:
> > >>>
> > >>>> + 1 (binding)
> > >>>>
> > >>>> Built Scala/ Clojure and ran tests
> > >>>>
> > >>>> On Wed, May 1, 2019 at 7:06 PM Aaron Markham <
> > >> aaron.s.mark...@gmail.com>
> > >>>> wrote:
> > >>>>
> > >>>>> Make that +1 (non-binding)
> > >>>>>
> > >>>>> On Wed, May 1, 2019 at 3:42 PM Aaron Markham <
> > >>> aaron.s.mark...@gmail.com>
> > >>>>> wrote:
> > >>>>>>
> > >>>>>> +1 (binding)
> > >>>>>>
> > >>>>>> * Built with GPU and tested the first part of the ssd example.
> > >>>>>> * Built with GPU / cross-compiled to arm8 for Jetson.
> > >>>>>> * Built Scala/Java on top of the cross-compiled arm8 (ran into
> > >>> trouble
> > >>>>>> here, but I think this is not popular enough yet to derail things,
> > >>>>>> plus there are workarounds)
> > >>>>>> * Built on CPU instance and tested docs.
> > >>>>>> http://34.201.8.176/versions/1.4.1/api/python/io/io.html
> > >>>>>> I don't see anything specific being different in this patch for
> > >> docs,
> > >>>>>> so hard to tell if there's an issue. I'll assume not given the
> > >>>>>> successful generation of the API docs.
> > >>>>>>
> > >>>>>>
> > >>>>>> On Wed, May 1, 2019 at 1:28 PM Pedro Larroy
> > >>>>>>  wrote:
> > >>>>>>>
> > >>>>>>> +1 (non-binding)
> > >>>>>>>
> > >>>>>>> Tried CPU build + C++ tests + 714 Python unit tests in 605s.
> > >>>>>>> ARMv7 build + small unit test in QEMU + ARMv8 builds.
> > >>>>>>>
> > >>>>>>> Thanks. Regards
> > >>>>>>>
> > >>>>>>> Pedro.
> > >>>>>>>
> > >>>>>>> On Wed, May 1, 2019 at 10:41 AM Qing Lan 
> > >>>> wrote:
> > >>>>>>>>
> >

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-03 Thread Aston Zhang
+1 (non-binding)

Passed all the code at zh.d2l.ai

On Thu, May 2, 2019 at 1:46 PM Joshua Z. Zhang  wrote:

> +1 (non-binding)
>
> Built from source with cuda/cudnn.
>
> - All tests passed
> - GluonCV unittest scripts passed
> - GluonCV training scripts passed
> - No issue with python multiprocessing
>
> Best,
> Zhi
> > On May 2, 2019, at 11:34 AM, kellen sunderland <
> kellen.sunderl...@gmail.com> wrote:
> >
> > +1 (non-binding)
> >
> > I checked TRT integration builds and tests pass.
> > MD5s
> > Sigs look good.
> >
> > -Kellen
> >
> > On Thu, May 2, 2019 at 10:51 AM Damien Stanton  >
> > wrote:
> >
> >> +1 (binding)
> >>
> >> Built from source / Scala / Clojure. All tests pass. The only issue
> >> of minor note: the macOS build guide indicates the directive `brew
> >> install opencv`; however, this installs OpenCV 4, which is currently
> >> incompatible with mxnet and causes a failed build. The guide should
> >> specify `brew install opencv@3` until/if version 4 is supported.
> >>
> >> Best,
> >> Damien
> >>
> >> On Thu, May 2, 2019 at 12:53 PM Lai Wei  wrote:
> >>
> >>> +1
> >>>
> >>> Built from source and tested keras-mxnet working fine.
> >>>
> >>> Best Regards
> >>>
> >>> Lai
> >>>
> >>>
> >>> On Wed, May 1, 2019 at 4:22 PM Carin Meier 
> wrote:
> >>>
> >>>> + 1 (binding)
> >>>>
> >>>> Built Scala/ Clojure and ran tests
> >>>>
> >>>> On Wed, May 1, 2019 at 7:06 PM Aaron Markham <
> >> aaron.s.mark...@gmail.com>
> >>>> wrote:
> >>>>
> >>>>> Make that +1 (non-binding)
> >>>>>
> >>>>> On Wed, May 1, 2019 at 3:42 PM Aaron Markham <
> >>> aaron.s.mark...@gmail.com>
> >>>>> wrote:
> >>>>>>
> >>>>>> +1 (binding)
> >>>>>>
> >>>>>> * Built with GPU and tested the first part of the ssd example.
> >>>>>> * Built with GPU / cross-compiled to arm8 for Jetson.
> >>>>>> * Built Scala/Java on top of the cross-compiled arm8 (ran into
> >>> trouble
> >>>>>> here, but I think this is not popular enough yet to derail things,
> >>>>>> plus there are workarounds)
> >>>>>> * Built on CPU instance and tested docs.
> >>>>>> http://34.201.8.176/versions/1.4.1/api/python/io/io.html
> >>>>>> I don't see anything specific being different in this patch for
> >> docs,
> >>>>>> so hard to tell if there's an issue. I'll assume not given the
> >>>>>> successful generation of the API docs.
> >>>>>>
> >>>>>>
> >>>>>> On Wed, May 1, 2019 at 1:28 PM Pedro Larroy
> >>>>>>  wrote:
> >>>>>>>
> >>>>>>> +1 (non-binding)
> >>>>>>>
> >>>>>>> Tried CPU build + C++ tests + 714 Python unit tests in 605s.
> >>>>>>> ARMv7 build + small unit test in QEMU + ARMv8 builds.
> >>>>>>>
> >>>>>>> Thanks. Regards
> >>>>>>>
> >>>>>>> Pedro.
> >>>>>>>
> >>>>>>> On Wed, May 1, 2019 at 10:41 AM Qing Lan 
> >>>> wrote:
> >>>>>>>>
> >>>>>>>> +1 (binding)
> >>>>>>>>
> >>>>>>>> build from source works for OSX and Ubuntu CPU
> >>>>>>>> Scala build/test successfully with Dynamic link and static
> >> link.
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> Qing
> >>>>>>>>
> >>>>>>>> 
> >>>>>>>> From: Sheng Zha 
> >>>>>>>> Sent: Wednesday, May 1, 2019 13:14
> >>>>>>>> To: d...@mxnet.apache.org
> >>>>>>>> Subject: Re: [VOTE] Release Apache MXNet (incubating) version
> >>>>> 1.4.1.rc0
> >>>>>>>>
> >>>>>>>> Hi all,
> >>>>>>>>
> >>>>>>>> Reminder that the vote for 1.4.1 release is still ongoing. If
> >> you
> >>>>> can, please help out. Thank you.
> >>>>>>>>
> >>>>>>>> -sz
> >>>>>>>>
> >>>>>>>> On 2019/04/30 06:51:45, Junru Shao 
> >>>> wrote:
> >>>>>>>>> Dear MXNet community,
> >>>>>>>>>
> >>>>>>>>> This is the 3-day vote to release Apache MXNet (incubating)
> >>>>> version v1.4.1.
> >>>>>>>>> The voting on dev@ list will start Apr 29 23:59:59 (PST) and
> >>>>> close on May
> >>>>>>>>> 02 23:59:59.
> >>>>>>>>>
> >>>>>>>>> Below are links to
> >>>>>>>>> 1) Release notes:
> >>>>>>>>>
> >>>>>
> >>>>
> >>>
> >>
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.1+Release+Notes
> >>>>>>>>> .
> >>>>>>>>> 2) Release Candidate:
> >>>>>>>>>
> >>> https://github.com/apache/incubator-mxnet/releases/tag/1.4.1.rc0
> >>>> .
> >>>>>>>>> 3) Source and signatures on Apache dist server:
> >>>>>>>>>
> >>>> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.1.rc0/.
> >>>>>>>>>
> >>>>>>>>> Please remember to TEST first before voting accordingly:
> >>>>>>>>> +1 = approve
> >>>>>>>>> +0 = no opinion
> >>>>>>>>> -1 = disapprove (provide reason)
> >>>>>>>>>
> >>>>>>>>> Best regards,
> >>>>>>>>> Junru Shao
> >>>>>>>>>
> >>>>>
> >>>>
> >>>
> >>
>
>


  1   2   3   4   5   6   7   8   9   10   >