Closed #17968.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17968#event-3568205765
NNPACK is currently only supported in the Makefile build
(https://github.com/apache/incubator-mxnet/issues/15974), which will be
removed. I think oneDNN (MKL-DNN) has replaced it, so we can remove NNPACK support as well. Any concerns?
> Violates the effort of removing libcuda.so totally, (would be great if
> someone can elaborate the motivation behind it).
Many customers use a single MXNet build that supports GPU features and deploy
it to both GPU and CPU machines. Due to the way CUDA containers are
designed, libcuda.so
One of the selling points of MXNet is (or used to be) speed, and having multiple
consecutive releases with speed regressions may not be acceptable to users who
adopted MXNet based on that speed advantage. Should we vote on a 1.7 beta release
and only vote on the 1.7 final release once the regressions
nue with the option of "I believe PPMC would prefer to put the
> ASF header on top of the file (ie. 2 headers)"
>
> Thanks,
> -Ciyong
>
> -Original Message-
> From: Leonard Lausen
> Sent: Tuesday, June 16, 2020 7:06 AM
> To: dev@mxnet.incubator.ap
Hi Justin,
Thank you. Please note that libgfortran.so is not subject to the GPL in this
case, as one may relicense it under the Apache License 2.0 based on the license
grant by the GCC developers. Thus any policy with respect to the GPL is
unrelated, as we are not talking about GPL software.
The ticket has
As per the consensus in [1], I requested INFRA to delete the MXNet convenience
binaries on repository.apache.org [2].
Zach previously offered to organize a third-party Maven distribution. However,
based on recently updated version 0.4 of the draft Apache Downstream
Distribution Branding Policy
Hi Justin,
thank you for clarifying the major modification threshold. To clarify the
scope of modification in MXNet: re-implementing the functionality in C++ is just
one aspect. MXNet generally provides more features than the original
implementation, such as automatic gradient
fortable with our PMC making best effort
> > decisions based on the ASF guidelines?
> >
> >
> > - Bob
> >
> >
> > [1]
> > https://lists.apache.org/thread.html/rb83ff64bdac464df2f0cf2fe8fb4c6b9d3b8fa62b645763dc606045f%40%3Cgeneral.incubator.
c8f69d64fd2b6eb253ade311fe7%401451947855%40%3Cgeneral.incubator.apache.org%3E
>
> [4] https://github.com/apache/trafodion/blob/master/core/sql/parser/ulexer.h
>
> [5] https://github.com/numpy/numpy/blob/master/numpy/core/einsumfunc.py
>
> [6]
> https://github.com/apache/in
Thank you Ciyong. After further investigation, the build issue is not as severe
as initially claimed on GitHub. I checked the high-water memory usage during a
single-process build: it's 2.7GB on master. On the 1.7 release, high-water usage
is 2.2GB. This is much more acceptable than the previously
s MIT license only. Apache License header was added
> when it was checked into MXNet repo with modifications)
>
> 3) You are asking if the files can remain in the MXNet repository with both
> license headers.
>
> - Bob
>
> On 6/9/2020 5:07 PM, Leonard Lausen wrote:
>
Hi Mentors,
https://www.apache.org/legal/src-headers.html#3party states the 5 rules for
handling third-party code included in the project [1]. In particular, the PPMC
shall handle major modifications on a case-by-case basis.
But the other rules state
> 1. Do not modify or remove any copyright
understanding
for that.
Thanks
Leonard
Bertrand Delacretaz writes:
> Hi,
>
> On Thu, Jun 4, 2020 at 8:44 AM Leonard Lausen wrote:
>> ...Does adding the following notice prior to any mention of a third-party
>> binary release work for clearly informing users?...
>
Hi Justin,
as there have been a couple of mails on the dev@ list prior to your mail
to the general@ list, and your mail contains a dramatic opening, I'd like to
provide some context here.
The problem currently in focus is how to ensure that the
http://mxnet.apache.org/get_started page is compliant with
Hi Justin,
Justin Mclean writes:
> It’s quite clear they should not be linked to from an Apache page
> like this as users will think these are Apache releases. Please remove
> them, after that bring it up on the incubator general list and we can
> discuss what needs to be done.
The status quo
Hi Justin,
this page currently contains some links to third-party binary distributions of
MXNet (for example at [1]). The question of what the PPMC should recommend to
those third parties to avoid trademark issues is currently being discussed on
private@ and trademark@.
With respect to the MXNet
Another data point is that we currently only support OpenJDK 8, but the JVM
languages are broken with OpenJDK 11, which is used on Ubuntu 18.04 for example.
See https://github.com/apache/incubator-mxnet/issues/18153
https://github.com/apache/incubator-mxnet/commit/fb73a1717acad61caeaeef010faed9e9fcc05f0e
implements the proposal, fixing a number of other issues that were blocking.
Please see the commit message for a complete list of changes.
As a follow-up item, I suggest removing the `cpplint` we
Another data point is that all of our Scala tests fail randomly with
`src/c_api/c_api_profile.cc:141: Check failed:
!thread_profiling_data.calls_.empty():`, so there seem to be some underlying
issues.
https://github.com/apache/incubator-mxnet/issues/17067
## Description
I propose to raise our toolchain requirements for the MXNet 2 development
branch to require at minimum gcc7 or clang6 on Unix systems and MSVC 2019 on
Windows systems. All 3 have [reasonably complete C++17
support](https://en.cppreference.com/w/cpp/compiler_support#cpp17) and MSVC
It's not only about the API documentation. Installation instructions or
tutorials will change over time. Building the website independently for
different versions may be the simplest approach. I'm also fine with any other
approach that enables users to look up documentation and instructions for
Even for 1.x, the current instructions are not compatible with the stable 1.6
release. We should build the website based on the 1.6 release branch until a
version selection is available.
We may also drop ONNX in MXNet 2. I'm not aware of anyone working on ONNX in
MXNet, and TVM can be used as a replacement.
@kalcohol please create a new issue about "static linking lib is (very) far
away from easy to use", describing your setup in more detail and, if possible,
suggesting how to improve the user experience.
> This seems to be a big change to the existing operator mode (imperative and
> symbolic).
Essentially, the motivation for deferred compute is to extend imperative mode so
that users can "construct a symbol" without using the symbolic API. This
addresses confusion around having two APIs and
In the past we always kept development on the master branch, so how about
creating a 1.7.0 release branch and keeping development on master?
Would it make sense to add optional support for sparse ndarrays and gradient
compression in `AbstractKVStore`? You mentioned that not all frameworks support
these features. Do you expect the API to change in the future?
Thank you @szha and @asmushetzel for looking through the RFC.
> Can you elaborate a bit more about specific use cases that this enables or
> simplifies? Is there something that can't be done today that this would
> enable? Are there major pain points that this would address compared to
>
Closing as issue was not picked up by the mailing list bridge.
Closed #16375.
A new **deferred computation** (DC) argument to the imperative MXNet APIs is
proposed. If enabled, memory allocation and computation are deferred as long as
possible. Users can export the computational graph recorded during deferred
computation, which enables hybridization support.
Arrays for
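To make the record-then-replay idea concrete, here is a toy sketch of deferred computation. This is not MXNet's actual implementation; all class and function names are hypothetical, and it only illustrates the principle of recording operations into a graph and computing them only when a result is needed.

```python
# Toy sketch of deferred computation (illustrative only, not MXNet's design):
# arithmetic builds a graph of nodes instead of computing immediately,
# and evaluation is deferred until the result is actually requested.
class DeferredArray:
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs

    def __add__(self, other):
        return DeferredArray("add", [self, other])

    def __mul__(self, other):
        return DeferredArray("mul", [self, other])

def const(value):
    # Leaf node holding a concrete value.
    return DeferredArray("const", [value])

def evaluate(node):
    # Walk the recorded graph and compute only now.
    if node.op == "const":
        return node.inputs[0]
    vals = [evaluate(i) for i in node.inputs]
    return vals[0] + vals[1] if node.op == "add" else vals[0] * vals[1]

# Recording is imperative-looking, but nothing is computed yet.
y = const(2) + const(3) * const(4)
print(evaluate(y))  # 14
```

The recorded graph (here, the `DeferredArray` tree) is what a real system could export for hybridization.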
telecommunication product
And then there are 3 expired trademarks that don't matter, I suppose.
Does Apache typically start a trademark registration process after incubation
has finished?
> Hen
>
> On Fri, Aug 30, 2019 at 1:27 AM Leonard Lausen wrote:
>> The MXNet brand is currently also unr
Anton Chernov writes:
> As a physicist I would like to point out that "Gluon" means: An elementary
> particle that acts as the exchange particle for the strong force between
> quarks [1].
> As a general scientific term it can barely be seen as a candidate for
> trademark registration.
This
Carin recently noted that gluonhq.com already uses the Gluon brand for
end-to-end enterprise mobile solutions, and Marco found that they have done so
since at least 2015. Do you see any impact on the Gluon brand
for deep learning models?
The MXNet brand is currently also unregistered by Apache
Due to the References: header, the prior email was still sorted into the
existing discussion thread. Cancelling this and resending without that header.
Leonard Lausen writes:
> Marco de Abreu writes:
>> 1. Which Python version to support. 3.5 vs 3.6 is currently in the
>> discussion due to Ubu
Marco de Abreu writes:
> 1. Which Python version to support. 3.5 vs 3.6 is currently in the
> discussion due to Ubuntu 16.04 being shipped with 3.5 while the biggest
> market share being 3.6 as of now.
We could drop Python 2 even before deciding when to drop 3.5.
> 2. When to do the
Hi,
"Currently, we only support gcc-4.8 build." [1]
Do we ever want to change this? gcc-4.8 has been available for more than
6 years, and a lot has happened during that time. Platforms have also
upgraded their default compiler versions, and gcc-7 is now commonly
available (eg. Ubuntu 18.04 LTS,
Lieven Govaerts writes:
> Hi,
>
> On Thu, 22 Aug 2019 at 17:01, Leonard Lausen wrote:
>
>> Hi,
>>
>> Pedro stated "Seems 3.6 is a reasonable choice." and there have been a
>> few +1 after Chaitanya's reply to Pedro. I would like to check if
To parallelize across machines: For GluonNLP we started submitting test
jobs to AWS Batch. Just adding a for-loop over the units in the
Jenkinsfile [1] and submitting a job for each [2] works quite well. Then
Jenkins just waits for all jobs to finish and retrieves their status.
This works since
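The fan-out/fan-in pattern described above (loop over test units, submit a job per unit, wait for all, collect statuses) can be sketched locally in a few lines. This is only an analogy: the real setup uses a Jenkinsfile plus AWS Batch, while here hypothetical unit names and a thread pool stand in for Batch job submissions.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical test units; in the real setup each would become an AWS Batch job.
UNITS = ["gluon", "ndarray", "symbol"]

def run_unit(unit):
    # Placeholder for "submit a Batch job and wait for completion";
    # here every unit trivially succeeds.
    return unit, "SUCCEEDED"

with ThreadPoolExecutor() as pool:
    # Fan out one task per unit, then block until all have finished.
    results = dict(pool.map(run_unit, UNITS))

# The CI step would fail if any unit did not succeed.
assert all(status == "SUCCEEDED" for status in results.values())
```

The key property is the same as in the Jenkins setup: the submitting process only aggregates statuses, so adding units scales the fleet rather than the coordinator.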
Thanks to everyone who made their opinion known. So far the consensus
is that any nan handling in MXNet should not affect performance, at
least not by default.
This still leaves open the question of whether we should document the
behavior of MXNet operators in the presence of nan values. For
Currently the default kernel of the nn.Embedding backward pass is known to be
buggy on P3 instances or when using CUDA 9.2 (the issue also occurs on
other instances with earlier versions of CUDA, but less often).
https://github.com/apache/incubator-mxnet/issues/11314
There is currently an opt-in for
Hello MXNet community,
It seems that there is currently no agreed upon principle for handling
`nan` values in operators. This has led to inconsistencies between
operators and also to inconsistency across releases. Some operators ignore
nan values (eg. argmax), others treat it as the maximum (e.g. topk
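The same design tension exists in NumPy, which makes for a quick illustration of the two conventions (this shows NumPy's behavior, not MXNet's, so it is only an analogy for the discussion):

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])

# Default reductions propagate nan ...
print(np.max(a))        # nan
# ... while the nan-aware variants ignore it.
print(np.nanmax(a))     # 3.0
print(np.nanargmax(a))  # 2
```

NumPy resolves the inconsistency by offering both behaviors under distinct, documented names (`max` vs `nanmax`), which is one possible model for documenting MXNet operators.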