After some thought along this direction, I found a better and more fun answer to the
above question: supporting tuple/ellipsis/slice in the TVM FFI efficiently.
I quickly hacked up a POC in https://github.com/tqchen/tvm/tree/pyffi that
supports the following benchmark script (disclaimer: it is only a POC so n
The following fast-path can be addressed in the TVM FFI:
- `tuple`, `list` via translation on the Python/Cython side (see benchmark
above)
- `str` is already fast (see benchmark above)
- `Context` can be quite fast if the object is a TVM object, around the same
magnitude as passing NDArray
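For illustration, the kind of Python-side translation fast path described above could look like the following sketch. Everything here (the `flatten_index` name and the tag scheme) is hypothetical and not TVM's actual FFI code; the point is only that tuple/slice/ellipsis arguments can be lowered to flat, typed data on the Python side before crossing the C boundary:

```python
# Hypothetical sketch of a Python-side fast path: lower indexing
# arguments (tuple/list, slice, Ellipsis) into flat, tagged values
# that a C FFI layer could consume cheaply. Names are illustrative.

def flatten_index(arg):
    """Translate one indexing argument into FFI-friendly primitives."""
    if arg is Ellipsis:
        return ("ellipsis",)
    if isinstance(arg, slice):
        # A slice is just three optional integers.
        return ("slice", arg.start, arg.stop, arg.step)
    if isinstance(arg, (tuple, list)):
        # Recurse so the C side only ever sees flat tagged values.
        return ("seq", [flatten_index(x) for x in arg])
    return ("value", arg)

print(flatten_index((slice(0, 4), Ellipsis, 3)))
# -> ('seq', [('slice', 0, 4, None), ('ellipsis',), ('value', 3)])
```

Doing this recursion in Cython rather than interpreted Python is what would make it a fast path in practice.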
@ptrendx Yes, there is an effort to profile the engine code flow using VTune. We
hope the exercise can pinpoint the hotspots that contribute most of the
latency, and to further split the pure C++ time between setup code (shape/type
inference, memory allocation, dependency setup) and op sched
Hi Ciyong, thanks for the proposal.
I like your suggestions. Will you be submitting a PR?
Some feedback:
* Regarding changing the URLs, let's avoid that. We just had a lot of
work trying to fix broken links.
* As far as changing the headings, sure, Tutorials and FAQs makes sense.
* Adding perform
yone who contributed
> to it.
>
> Unfortunately Zechen Wang just discovered another issue with GPU Pointwise
> Fusion: https://github.com/apache/incubator-mxnet/issues/17105
>
> Thus, -1.
>
> Unfortunately, as the nightly release pipeline was broken until recently
> (and
recently (and
still isn't re-set up completely yet), the issue hasn't been discovered earlier.
Przemysław may have a quick fix for the issue. Another option would be to
release 1.6 with MXNET_USE_FUSION default to 0.
Best regards
Leonard
On Wed, 2019-12-18 at 05:30 +, Chen, Ci
Zhao, Patric
Sent: Tuesday, December 17, 2019 8:51 AM
To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc0
Thanks, Tredak, I will add some words for the new feature in the release note.
+1 for voting because we hav
Shall we update the website installation page with nightly build
information as well (after we figure out the CD details)?
Best,
Haibin
On Tue, Dec 10, 2019 at 10:15 PM Lausen, Leonard
wrote:
> Not yet. As a community, we first need to add the nightly build hosting
> feature
> to the community
Thanks, Tredak, I will add some words for the new feature in the release note.
+1 for voting, because we have run multiple rounds of tests locally and got the
expected performance boost.
--Patric
> -Original Message-
> From: Przemysław Trędak
> Sent: Tuesday, December 17, 2019 4:49 AM
>
Once 1.6 release is complete, we will create a branch for MXNet 1.x for future
releases and start using master branch for 2.0 development.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-m
The status of MXNet 2.0 project is tracked at:
https://github.com/apache/incubator-mxnet/projects/18. The status for each
project will be updated by the contributor who's driving it. If you have more
projects that you intend to drive please first discuss here.
--
You are receiving this because
e convenient way for
> deployment.
> BTW, does it support backward ops too?
>
> -Ciyong
>
> -Original Message-
> From: Marco de Abreu <marco.g.ab...@gmail.com>
> Sent: Sunday, December 8, 2019 2:56 AM
> To: dev@mxnet.incubator.apache.org<mailto:dev@m
Hi Guanxin,
To subscribe to the dev@ mailing list, you need to send a request to
dev-subscr...@mxnet.apache.org.
More details here:
https://mxnet.incubator.apache.org/community/contribute#mxnet-dev-communications
Thanks,
-tao
On Fri, Dec 13, 2019 at 11:40 PM Qiao, Guanxin
wrote:
> Hi,
>
> I am Guanxin f
Invitation sent to jd...@amazon.com. Welcome to the community!
-tao
On Fri, Dec 13, 2019 at 11:40 PM Desai, Jay
wrote:
> Hi please add me on slack, I have high interest in mxnet for contribution
> and research. Thanks
>
The follow-up PR shouldn’t have a direct effect, since the problem was taken care
of with the first PR; it was just something I did along the way that is
probably a good idea in general.
The dual linking can probably be addressed if it is found that the
performance returns. I would suggest that when do
That error should be fixed by Chris's work at
https://github.com/apache/incubator-mxnet/pull/17039
It is currently expected that libmxnet.so transitively requires both libomp.so
and libgomp.so. If this is an issue, we need to build OpenBLAS from source as
part of our build scripts, because it int
Hi Chris,
From the licensing standpoint, llvm omp is indeed a choice. But previously
we noticed that building mxnet with cmake and the llvm omp under 3rdparty
folder will cause two runtimes linked to libmxnet.so [1]. Do you think
that's still a problem?
Also with the current two build systems, l
Hi Patric,
The llvm openmp we compile (originally from the same Intel source, as we all
know) seems to be Apache 2.0 licensed. Could we use that instead from a
licensing standpoint?
On Wed, Dec 11, 2019 at 10:36 PM Zhao, Patric wrote:
> Thanks, Sam.
>
> The root cause is from different OpenMP librar
Thanks, Sam.
The root cause is the use of different OpenMP libraries. Intel OpenMP provides
better performance, as your data shows.
Regarding the release: because of the license issue [1], we can't ship Intel OpenMP in
the binary, but most of the performance boost from MKLDNN is still available.
I think it sh
That's a bit what the amalgamation part was for: a simplified inference interface.
The last time I used amalgamation (some years ago), it was often broken by updates
and not really maintained.
--
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https:
Not yet. As a community, we first need to add the nightly build hosting feature
to the community run CD and then we can add the page so that the exact date
doesn't need to be specified.
I'm not sure what steps are required for this. Do we need to host the artifacts
on Apache's infrastructure? Or c
Is there a way to install the latest nightly package without having to
specify exact date?
Thanks,
Lin
On Sun, Dec 8, 2019 at 6:13 PM Lausen, Leonard
wrote:
> From Shanghai, the closest endpoint (automatically chosen endpoint) is in
> Tokyo
> and download speed for mxnet-mkl was on average 1.7
anks,
>
> --Patric
>
> > -Original Message-
> > From: Lin Yuan
> > Sent: Sunday, November 10, 2019 1:58 PM
> > To: dev@mxnet.incubator.apache.org
> > Subject: Re: BytePS-MXNet Integration
> >
> > Very interesting proposal. I have tried BytePS on so
>
>
> @stereomatchingkiss good point. What are you using c/c++ api for?
1. Developing stand-alone apps on desktop and mobile
2. Wrappers for other languages (e.g. PHP)
3. Running inference tasks on AWS Lambda; we do not want to prune the libs of
python manually if we could build a slim library of mxne
@stereomatchingkiss good point. What are you using c/c++ api for?
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/16167#issuecomment-563883331
Any plan to simplify the build of the C and C++ APIs for MXNet 2.0? It is hard (or very
hard) to build a working version of MXNet with the C++ API on different
platforms (Windows, Linux, macOS); every new release of MXNet may or may not
break something, and we need to spend many hours to figure out how to
Need to include a fix for the test error
https://github.com/apache/incubator-mxnet/pull/15921#pullrequestreview-328686634
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17006
ok, i see you answered below; it was not obvious at first.
you can upgrade the llvm.
On Sun, Dec 8, 2019 at 6:51 PM Lausen, Leonard
wrote:
> Thanks Chris for the elaboration.
>
> > What the assert in question is trying to say is that mxnet code is
> calling
> > into omp library after a fork, but b
please answer the questions in my last email regarding the suspected issue
in mxnet as well as on that PR you opened.
On Sun, Dec 8, 2019 at 7:00 PM Lausen, Leonard
wrote:
> The assertion failure in the MXNet DEBUG build goes away by updating LLVM
> OpenMP
> to the latest released version. All
The assertion failure in the MXNet DEBUG build goes away by updating LLVM OpenMP
to the latest released version. All evidence I have points to the assertion
failure being due to a bug in the 2-year-old UNRELEASED version of LLVM OpenMP
that we are currently using in CMake builds.
Thus I'm reques
Thanks Chris for the elaboration.
> What the assert in question is trying to say is that mxnet code is calling
> into the omp library after a fork, but before the omp library’s atfork()
> handler is called, so the omp library has not yet initialized a new team of
> threads. This looks to be the case
From Shanghai, the closest endpoint (automatically chosen endpoint) is in Tokyo
and download speed for mxnet-mkl was on average 1.7 MB/s with a maximum of 5
MB/s during my test.
On Sun, 2019-12-08 at 01:30 +, Sheng Zha wrote:
> > Heres a set of links for today’s builds
> >
> > (Plain mxnet, n
Great investigation, thank you. I have to agree with your analysis, and thank
you for helping resolve this long-standing issue.
This will not repair the damage done to the community by losing 3-4
valuable contributors. Introducing a library that causes bugs, then blocking
changes and locking GitHub issues which
Hi Leonard.
Are you saying that you have updated this library and the problems described
in the related tickets are no longer present?
P.
On Sunday, December 8, 2019, Lausen, Leonard
wrote:
> Thanks Pedro and Chris for your responses.
>
> After further investigation I find:
>
> 1) I don't think
This is actually useful information, thanks.
Still, I don't see a justification for vetoing being able to choose the
library at compile time. Fixing the issue you reasonably describe and being
able to choose are two orthogonal topics.
Thanks for the constructive information.
On Sunday, December
To: dev@mxnet.incubator.apache.org
Subject: Re: Custom C++ Operators
Awesome project, love it! It really seems easy to use, great job!
-Marco
Skalicky, Sam <sska...@amazon.com.invalid>
schrieb am Sa., 7. Dez. 2019,
19:50:
Hi MXNet Community,
We have been working on adding support
Thanks Sheng,
Looks like 12/8 builds are working as expected too:
(Plain mxnet, no mkl no cuda)
https://repo.mxnet.io/dist/2019-12-08/dist/mxnet-1.6.0b20191208-py2.py3-none-manylinux1_x86_64.whl
(mxnet-mkl)
https://repo.mxnet.io/dist/2019-12-08/dist/mxnet_mkl-1.6.0b20191208-py2.py3-none-manylinux
btw, the call stack I am referring to below is the one where I explained
this problem before; after I got a hostile response, I locked the issue.
On Sun, Dec 8, 2019 at 7:24 AM Chris Olivier wrote:
> Again, here is what I suspect the bug is in mxnet:
>
> The way that advanced openmp libraries
Again, here is what I suspect the bug is in mxnet:
The way that advanced openmp libraries handle a fork is that they hook an
atfork() callback in which, in the new process, it creates a new “team” of
threads to use for its thread pool (since all of the thread handles in its
data structure belong t
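The atfork pattern described above can be sketched in Python with a toy runtime. Here `os.register_at_fork` plays the role of the pthread_atfork hook a real OpenMP runtime would install, and the "team" is just a list of labels; everything in this sketch is illustrative, not MXNet or LLVM OpenMP code:

```python
import os

class ToyOmpRuntime:
    """Toy model of an OpenMP-style runtime: it keeps a 'team' of worker
    handles. After fork(), the child's inherited handles are dead, so an
    atfork child-callback must rebuild the team before the pool is used."""

    def __init__(self, size=4):
        self.team = ["worker-%d" % i for i in range(size)]

    def atfork_child(self):
        # Runs in the child right after fork: drop the inherited (dead)
        # thread handles and create a fresh team for the new process.
        self.team = ["child-worker-%d" % i for i in range(len(self.team))]

runtime = ToyOmpRuntime()
if hasattr(os, "register_at_fork"):  # available on Unix, Python 3.7+
    os.register_at_fork(after_in_child=runtime.atfork_child)
```

The suspected bug then corresponds to code calling into the pool in the window between the fork and the callback, i.e. while `team` still holds the parent's dead handles.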
Really great features, it will provide a more convenient way for deployment.
BTW, does it support backward ops too?
-Ciyong
-Original Message-
From: Marco de Abreu
Sent: Sunday, December 8, 2019 2:56 AM
To: dev@mxnet.incubator.apache.org
Subject: Re: Custom C++ Operators
Awesome
Thanks Pedro and Chris for your responses.
After further investigation I find:
1) I don't think https://github.com/apache/incubator-mxnet/issues/14979 is
caused by any incompatibility between gomp and llvm / intel omp. Rather it's
simply a problem of llvm / intel omp. See my comment to the issue
I do expect the API to change in the future. Currently @szhengac @zhongyuchen
and I are exploring APIs for gradient compression with a few algorithms, and we
may bring the best practices back to MXNet.
--
You are receiving this because you are subscribed to this thread.
Reply to this email
How's this project going?
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/16376#issuecomment-562906794
> Heres a set of links for today’s builds
>
> (Plain mxnet, no mkl no cuda)
> https://apache-mxnet.s3-us-west-2.amazonaws.com/dist/2019-12-07/dist/mxnet-1.6.0b20191207-py2.py3-none-manylinux1_x86_64.whl
> (mxnet-mkl)
> https://apache-mxnet.s3-us-west-2.amazonaws.com/dist/2019-12-07/dist/mxnet_mkl-
Hi @samskalicky , thank you for the contribution!
I have several suggestions.
- custom GPU operators
1. Provide CUDA stream in `OpResource`.
2. Share the same function on CPU and GPU.
Users can discriminate the context by `MXTensor::dltensor::ctx`
- Call framework specific math helper
Stop disseminating false information:
https://github.com/apache/incubator-mxnet/issues/14979
On Sat, Dec 7, 2019 at 7:04 AM Chris Olivier wrote:
> -1
>
> mkldnn removed omp5 for licensing issues
> no bugs have actually been traced to the use of llvm openmp. only an assert
> caused by an actual
Hi Marco,
I believe this CodeBuild solution is a stop-gap until the Jenkins CD project
that Per and Sheng have been driving is finished. There were some failing
builds with the Jenkins CD that were preventing some nightly builds from being
available. The long term goal is to get back to the Jen
Awesome project, love it! It really seems easy to use, great job!
-Marco
Skalicky, Sam schrieb am Sa., 7. Dez. 2019,
19:50:
> Hi MXNet Community,
>
> We have been working on adding support for custom C++ operators for a
> while and are happy to announce that the initial functionality is now
> a
Could you elaborate on how a non-Amazonian is able to access, maintain and
review the CodeBuild pipeline? How come we've diverged from the community-agreed
standard where the public Jenkins serves the purpose of
testing and releasing MXNet? I'd be curious about the issues you're
encountering wi
Hi MXNet Community,
We have been working on getting nightly builds fixed and made available again.
We’ve made another system using AWS CodeBuild & S3 to work around the problems
with Jenkins CI, PyPI, etc. It is currently building all the flavors and
publishing to an S3 bucket here:
https://us-
Chris, I'm trying to understand the situation better exactly because I think
this bug is important and I would like to address it. Therefore I asked you a
question, expecting your answer would be helpful to solve this problem.
Unfortunately it seems to me that your answer misses the point of my que
if it is really a problem, then it would be prioritized. all the necessary
info is in that issue (and i already mentioned just yesterday or today on
that ticket) what it was again and it’s like i was talking to no one, as it
has been, simply an immediate revert to “remove the library”. in the time
Chris, if you can fix this in a small fraction of a time, please go ahead and do
so. Could you clarify why you think Intel's statement is nonsense or not
applicable? "Because different OpenMP runtimes may not be binary-compatible,
it's important to ensure that only one OpenMP runtime is used throug
btw trying to override a veto with a “lazy consensus” is not a valid
approach.
On Fri, Dec 6, 2019 at 8:44 PM Lausen, Leonard
wrote:
> I think it's reasonable to assume that the Intel MKLDNN team is an
> "authoritative"
> source about the issue of compilation with OpenMP and the OpenMP runtime
> l
-1
mkldnn removed omp5 for licensing issues
no bugs have actually been traced to the use of llvm openmp. only an assert
caused by an actual bug in mxnet code. there are suitable workarounds.
over time llvm omp has simply been used as a “catch all” for random
problems that aren’t related at all (s
i tested 3.12.2, 3.13.3, 3.14.2, 3.15.5
shiwen hu wrote on Sat, Dec 7, 2019 at 7:28 PM:
> yes.
>
> Lausen, Leonard wrote on Sat, Dec 7, 2019 at 7:20 PM:
>
>> Do you mean starting 3.15.5 it works fine?
>> The image you attached doesn't display on my end.
>>
>> On Dec 7, 2019 19:12, shiwen hu wrote:
>> [image.png]
>>
>> I teste
yes.
Lausen, Leonard wrote on Sat, Dec 7, 2019 at 7:20 PM:
> Do you mean starting 3.15.5 it works fine?
> The image you attached doesn't display on my end.
>
> On Dec 7, 2019 19:12, shiwen hu wrote:
> [image.png]
>
> I tested these versions. until 3.15.5 is working fine.
>
> shiwen hu <yajiedes...@gma
Do you mean starting 3.15.5 it works fine?
The image you attached doesn't display on my end.
On Dec 7, 2019 19:12, shiwen hu wrote:
[image.png]
I tested these versions. until 3.15.5 is working fine.
shiwen hu <yajiedes...@gmail.com> wrote on Sat, Dec 7, 2019 at 1:24 PM:
Now, other problems are solve
[image: image.png]
I tested these versions. Starting with 3.15.5, it is working fine.
shiwen hu wrote on Sat, Dec 7, 2019 at 1:24 PM:
> Now, other problems are solved by modifying CMakeLists.txt, but the "command
> line is too long" problem requires updating CMake. However, I don't know which
> minimum version fixed the problem. I
Now, other problems are solved by modifying CMakeLists.txt, but the "command
line is too long" problem requires updating CMake. However, I don't know which
minimum version fixed the problem. I will try to do some tests to find out the
minimum version.
Pedro Larroy wrote on Sat, Dec 7, 2019 at 3:52 AM:
> CMake shipped with ubu
Thanks Pedro for pointing out the problems with old CMake versions. I find that
the popular Deep Learning AMIs provided on AWS, while based on Ubuntu 16.04 and
18.04, come with an updated version of CMake (3.13.3) pre-installed.
CMake 3.13 was released more than 1 year ago. Anyone with an older ver
I think it's reasonable to assume that the Intel MKLDNN team is an "authoritative"
source on compilation with OpenMP and on OpenMP runtime library
related issues. Thus I suggest we follow the recommendation of the Intel MKLDNN team
within the MXNet project.
Looking through the Intel MKL
I will try to stay on the sidelines for now, since previous conversations
about OMP have not been productive here and I have spent way too much time
on this already; I'm not the first one giving up on trying to help with
this topic.
I would be glad if you guys can work together and find a solution.
CMake shipped with ubuntu has issues when compiling with CUDA on GPU
instances. I wouldn't recommend anything older than 3.12 for Linux GPU
https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_core.sh#L63
I don't know about windows CMake version but would make sense to
Hi all. CI is back to normal after Jake's commit:
https://github.com/apache/incubator-mxnet/pull/16968 please merge from
master. If someone could look into the TVM building issues described
above would be great.
On Tue, Dec 3, 2019 at 11:11 AM Pedro Larroy
wrote:
> Some PRs were experiencing b
Is this related to https://github.com/apache/incubator-mxnet/issues/10856?
I unlocked that Github issue based on the Apache Code of Conduct
https://www.apache.org/foundation/policies/conduct#specific-guidelines
On Sat, 2019-11-30 at 02:47 -0800, Pedro Larroy wrote:
> (py3_venv) piotr@34-215-197
Currently we declare cmake_minimum_required(VERSION 3.0.2)
I'm in favor of updating our CMake requirement. The main question may be what
new version to pick as minimum requirement.
In general, there is the guideline
> You really should at least use a version of CMake that came out after your
> c
Thanks for the thoughtful and valuable comments @arcadiaphy.
> I've deployed many models with scala API, and run them in multiple threads.
> The whole system has run smoothly in production environment for more than 2
> months.
> The backend of inference is graph executor, which is created for e
@anirudh2290 Just saw this RFC. Let me share what I've done in multithreaded
inference; I think it's the only viable way now in mxnet.
I've deployed many models with scala API, and run them in multiple threads. The
whole system has run smoothly in production environment for more than 2 months.
We don't lose pip by hosting on S3. We just don't host nightly releases on PyPI
servers, which would mirror them to several hundred mirrors immediately after each
build is published and which is very expensive for the PyPI project. People can still
install the nightly builds with pip by specifying the -f opti
Are weekly releases an option? It was brought up as concern that we might
lose pip as a pretty common distribution channel where people consume
nightly builds. I don't feel like that concern has been properly addressed
so far.
-Marco
Lausen, Leonard schrieb am Mi., 4. Dez. 2019,
04:09:
> As a s
As a simple POC to test distribution, you can try installing MXNet based on
these 3 URLs:
pip install --no-cache-dir
https://mxnet-dev.s3.amazonaws.com/mxnet_cu101-1.5.1.post0-py2.py3-none-manylinux1_x86_64.whl
pip install --no-cache-dir
https://mxnet-dev.s3-accelerate.amazonaws.com/mxnet_cu101-
CloudFront launched endpoints in Hong Kong during 2019. Based on my tests, they
have excellent connectivity to mainland China [1]. Thus we don't need to roll
our own geo-location based DNS server solution at this moment (though the great
firewall policy may change and we need to adapt).
Best regar
This has the following effect:
- the unfortunate group of user whose GPUs' SMs were removed would be
sacrificed so that import of mxnet on their machine would take quite some time
on the order of hours. we don't have the usage information to guide which SMs
to drop.
- it would cut down our binar
What about cutting down on SMs as recommended by Kellen?
Sheng Zha schrieb am Di., 3. Dez. 2019, 20:15:
> This is certainly one way to do it. However, the binary size limits our
> ability to publish pypi. So assuming that we want to have our binary on
> pypi still, we'd have to convince pypa to
This is certainly one way to do it. However, the binary size limits our ability
to publish pypi. So assuming that we want to have our binary on pypi still,
we'd have to convince pypa to raise our limits. Thus, it seems to me that this
hypothetical vote with respect to stopping nightly publish to
Some PRs were experiencing build timeouts in the past. I have diagnosed
this to be a saturation of the EFS volume holding the compilation cache.
Once CI is back online this problem is very likely to be solved and you
should not see any more build timeout issues.
On Tue, Dec 3, 2019 at 10:18 AM Ped
Excellent! Could we maybe come up with a POC and a quick writeup and then
start a proper vote after everyone verified that it covers their use-cases?
-Marco
Sheng Zha schrieb am Di., 3. Dez. 2019, 19:24:
> Yes, there is. We can also make it easier to access by using a
> geo-location based DNS s
Yes, there is. We can also make it easier to access by using a geo-location
based DNS server so that China users are directed to that local mirror. The
rest of the world is already covered by the global cloudfront.
-sz
On 2019/12/03 18:22:22, Marco de Abreu wrote:
> Isn't there an s3 endpoint
Isn't there an s3 endpoint in Beijing?
It seems like this topic still warrants some discussion and thus I'd prefer
if we don't move forward with lazy consensus.
-Marco
Tao Lv schrieb am Di., 3. Dez. 2019, 14:31:
> * For pypi, we can use mirrors.
>
> On Tue, Dec 3, 2019 at 9:28 PM Tao Lv wrote
Also please take note that there's a stage building TVM which is executing
compilation serially and takes a lot of time which impacts CI turnaround
time:
https://github.com/apache/incubator-mxnet/issues/16962
Pedro
On Tue, Dec 3, 2019 at 9:49 AM Pedro Larroy
wrote:
> Hi MXNet community. We are
Hi MXNet community. We are in the process of updating the base AMIs for CI
with an updated CUDA driver to fix the CI blockage.
We would need help from the community to diagnose some of the build errors
which don't seem related to the infrastructure.
I have observed this build failure with tvm whe
* For pypi, we can use mirrors.
On Tue, Dec 3, 2019 at 9:28 PM Tao Lv wrote:
> As we have many users in China, I'm considering the accessibility of S3.
> For pip, we can use mirrors.
>
> On Tue, Dec 3, 2019 at 3:24 PM Lausen, Leonard
> wrote:
>
>> I would like to remind everyone that lazy consensus
As we have many users in China, I'm considering the accessibility of S3.
For pip, we can use mirrors.
On Tue, Dec 3, 2019 at 3:24 PM Lausen, Leonard
wrote:
> I would like to remind everyone that lazy consensus is assumed if no
> objections
> are raised before 2019-12-05 at 05:42 UTC. There has been
I would like to remind everyone that lazy consensus is assumed if no objections
are raised before 2019-12-05 at 05:42 UTC. There has been some discussion about
the proposal, but to my understanding no objections were raised.
If the proposal is accepted, MXNet releases would be installed via
p
> > > suppose we can once again provide the whole link, but getting directly
> > > from
> > > pip is the familiar experience for most devs.
> > >
> > > Yes, 1.6 is the target release, but I don't see a world where the team can
> > > create new operato
e for most devs.
>>
>> Yes, 1.6 is the target release, but I don't see a world where the team can
>> create new operators, and then get it pushed out to stable fast enough for
>> the
>> book writers.
>>
>> Sincerely,
>>
>> Alex Chun
From: Lausen, Leonard
> Sent: Sunday, December 1, 2019 10:08 PM
> To: dev@mxnet.incubator.apache.org
> Cc: Kamakoti, Balaji
> Subject: Re: Stopping nightly releases to Pypi
>
> If we decide to do weekly pre-release builds to Pypi, what's the benefit? To
> catch bugs and p
Quoting Dustin from Pypi: "Hi folks, this is a really big ask. The mxnet-*
projects already represent a huge portion of PyPI's total size on disk and in
terms of bandwidth. Per https://pypi.org/stats/, the mxnet-* projects total more
than 1.5TB of PyPI's 6.5TB total size."
Given these numbers, red
: dev@mxnet.incubator.apache.org
Cc: Kamakoti, Balaji
Subject: Re: Stopping nightly releases to Pypi
If we decide to do weekly pre-release builds to Pypi, what's the benefit? To
catch bugs and pinpoint when they were introduced, having weekly builds may be
too coarse. So people would likely prefer t
If we decide to do weekly pre-release builds to Pypi, what's the benefit? To
catch bugs and pinpoint when they were introduced, having weekly builds may be
too coarse. So people would likely prefer the nightly releases and install them
from S3 via: pip install --pre mxnet-cu101 -f
http://mxnet.s3.
Makes sense to me to release nightlies to s3 only. Can we reduce size by
cutting down on the SMs we release? Was the main complaint around cuda release
sizes?
On Dec 1, 2019 9:43 PM, "Lausen, Leonard" wrote:
Hi MXNet Community,
since more than 2 months our binary Python nightly releases publ
Hi Leonard,
Is there any reason why we shouldn't take both options? I.e., we do weekly builds
on PyPI and provide the S3 option. I would be inclined to make sure we provide
as many avenues as possible to reduce friction for developers. The d2l.ai book
by Alex Smola is attracting a community that so
nks!
> -Ciyong
>
> -Original Message-
> From: Zhao, Patric
> Sent: Monday, November 18, 2019 2:10 PM
> To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> Subject: RE: RE: MXNet 1.6.0 release
>
> Plan to cherry-pick below PR into 1.6. Please take a
PM
To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
Subject: RE: RE: MXNet 1.6.0 release
Plan to cherry-pick below PR into 1.6. Please take a review.
https://github.com/apache/incubator-mxnet/pull/16837
> -Original Message-
> From: Tan, Jonathan
> Sent: Monday, November 18
Hi MXNet Community,
currently MXNet provides binary nightly releases of the master branch on pypi.
Do we have any binary nightly releases of the 1.6 branch available (eg on a S3
bucket)?
As the 1.6 branch and the master branch start to diverge, it would be good for
downstream projects that target
Good suggestion. I have a local draft for website improvements, and I will
add this part as well.
My team member will send out the proposal on dev@ soon 😊
From: Chaitanya Bapat
Sent: Monday, November 25, 2019 9:35 AM
To: dev@mxnet.incubator.apache.org
Cc: u...@mxnet.apache.org
Subject: Re
Aaron, who has led the work on the latest MXNet website: maybe you can recommend
where we could place this (I know it can be added as a markdown file that
will be picked up by the website), but if we could stitch this up in a correct
way quickly, that would be great!
What do you think?
On Sun, 24 Nov 2019
It’s great that we have a full list of MXNet applications.
I think it would be better if the MXNet community maintained an official list on
the MXNet website.
Thanks,
--Patric
From: Chaitanya Bapat
Sent: Monday, November 25, 2019 8:36 AM
To: dev@mxnet.incubator.apache.org; u...@mxnet.apache.org
Subje