Just realized I did not actually link to the issue I mentioned; it is
https://github.com/apache/incubator-mxnet/issues/17507
On 2020/05/01 18:19:27, Przemysław Trędak wrote:
> Hi Naveen,
>
> The problem you see with the loss is that the model clips
> the gradient, which in
Hi Naveen,
The problem you see with the loss is that the model clips the gradient, which
in the case of AMP is scaled by the loss scale. For clipping to work correctly,
you need to apply the same loss scale to the threshold you use to clip the
gradients. This is currently possible
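A minimal sketch of what this looks like in Gluon, assuming MXNet 1.6+ AMP (the
_amp_loss_scaler attribute is internal and may differ between versions;
dataloader and loss_fn are placeholders):

import mxnet as mx
from mxnet import autograd, gluon
from mxnet.contrib import amp

amp.init()
net = gluon.nn.Dense(10)
net.initialize(ctx=mx.gpu())
trainer = gluon.Trainer(net.collect_params(), 'sgd')
amp.init_trainer(trainer)

max_norm = 1.0  # the unscaled clipping threshold

for data, label in dataloader:  # placeholder data source
    with autograd.record():
        loss = loss_fn(net(data), label)  # placeholder loss function
        with amp.scale_loss(loss, trainer) as scaled_loss:
            autograd.backward(scaled_loss)
    grads = [p.grad() for p in net.collect_params().values()
             if p.grad_req != 'null']
    # The gradients carry the loss scale, so the threshold must too:
    gluon.utils.clip_global_norm(
        grads, max_norm * trainer._amp_loss_scaler.loss_scale)
    trainer.step(1)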
I added GPU vectorization of elementwise ops to the 1.7 scope, but it will need
a small extension - the PR should be ready to merge to master and cherry-pick
to 1.7 early next week.
Thanks
Przemek
On 2020/04/09 08:27:58, "Chen, Ciyong" wrote:
> Hi dev,
>
> Just a reminder for you to do the
I personally like the idea of opt-in more than opt-out:
- ultimately PR author wants the PR to be merged so they (or committer
reviewing the PR) will trigger the CI
- if it is easy to trigger the CI via the bot command, then the amount of work
per PR should be less than with opt-out (since most
Wait, what?
That's not right - I was even discussing this with Sheng yesterday (I did not
immediately see that page myself, which I agree is a problem, but still, the
page exists). It is https://mxnet.incubator.apache.org/get_started/download
Przemek
On 2020/03/04 21:11:04, Zach Kimberg wrote:
Dear All,
The Apache MXNet (incubating) community is happy to announce Apache MXNet
(incubating) version 1.6.0!
Release blog post:
https://medium.com/apache-mxnet/apache-mxnet-1-6-0-release-is-now-available-f48f1dd16dd0
Apache MXNet (incubating) is a deep learning framework designed for both
Hi Denis,
Could this bot be smart enough to first do the sanity pipeline (to catch stuff
like lint errors etc.) before launching the full thing?
Thanks
Przemek
On 2020/02/12 18:12:07, "Davydenko, Denis" wrote:
> Hello, MXNet dev community,
> As you all know, the experience with CI
download artifacts from Apache dist repo;
> > > > 2. the signature looks good;
> > > > 3. build from source code with MKL-DNN and MKL on centos;
> > > > 4. run fp32 and int8 inference of ResNet50 under /example/quantization/.
> > > >
> >
tar and signature are missing from the tag.
>
> On Fri, Jan 31, 2020 at 11:09 AM Przemysław Trędak wrote:
>
> > Dear MXNet community,
> >
> > This is the vote to release Apache MXNet (incubating) version 1.6.0.
> > Voting starts today and will close on
Dear MXNet community,
This is the vote to release Apache MXNet (incubating) version 1.6.0. Voting
starts today and will close on Monday 2/3/2020 23:59 PST.
Link to release notes:
https://cwiki.apache.org/confluence/display/MXNET/1.6.0+Release+notes
Link to release candidate:
Dear MXNet community,
I'm happy to announce the results of the vote.
This vote passes with 13 +1 votes (5 binding) and no 0 or -1 votes.
+1 votes
* Zhi Zhang / binding
* Qing Lan / binding
* Markus Weimer / binding
* Haibin Lin / binding
* Jun Wu / binding
* Lin Yuan
* Lai Wei
* Xinyu Chen
*
2019 8:51 AM
> > > To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> > > Subject: RE: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc0
> > >
> > > Thanks, Tredak. I will add some words about the new feature to the release note.
Dear MXNet community,
This is the vote to release Apache MXNet (incubating) version 1.6.0. Voting
starts now and will close on Friday, 20th December 2019 23:59:59 PST.
Link to release notes:
https://cwiki.apache.org/confluence/display/MXNET/1.6.0+Release+notes
Link to release candidate:
Dear MXNet Community,
From talking to different Members of the Community, I realized there is a
misunderstanding of what "code freeze" actually means. Let me try to clear up
this confusion in this email.
The code freeze does not mean "1.6 release is done, let's vote on it and ship
it as-is".
Dear MXNet Community,
This morning I updated the 1.6.x branch and so the code freeze is in effect. I
would like to thank everyone who helped in preparing and reviewing pull
requests to meet this deadline.
Unfortunately, nightly tests do not currently pass (I created an issue about
this: [1]).
hao, Patric
> > Sent: Friday, November 1, 2019 12:13 PM
> > To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> > Subject: RE: RE: MXNet 1.6.0 release
> >
> > Sure, I will look into the issue.
> >
> > > -Original Message-
> > > From: Przemysł
-mxnet/issues/16049 was fixed in master
> branch yesterday. Can we include it for the 1.6 release?
>
> Thank you
> Leonard
>
> On Fri, 2019-10-25 at 14:24 +0000, Przemysław Trędak wrote:
> > Dear MXNet Community
> >
> > Last night I updated 1.6.x branch to p
ogress.
>
> Feel free to ping me if there is anything we can help with.
>
> Thanks,
>
> --Patric
>
> > -Original Message-
> > From: Przemysław Trędak
> > Sent: Friday, October 25, 2019 10:25 PM
> > To: d...@mxnet.apache.org
> > Subject: Re: MXNet 1.6.0
Dear MXNet Community
Last night I updated the 1.6.x branch to point to the current master. The code
freeze is now in effect.
That said, since most of the features intended for the 1.6 release are still
not fully finished (a few PRs for BERT GPU performance, multiple MKLDNN PRs,
multiple PRs tagged NumPy
Oooh, that is why we did not see this button... Could you add me and Dick?
Thank you!
Przemek
On 2019/10/23 23:11:32, Marco de Abreu wrote:
> We can't use the role feature of GitHub, thus committers have to be added
> manually by an admin.
>
> Lausen, Leonard wrote on Thu, 24 Oct 2019,
>
Hi MXNet Community,
As the 1.5.1 patch release is done (many thanks Tao!), it is time to prepare
for the next minor release of MXNet - 1.6.0.
I (ptrendx@github / ptredak@mxnet Slack) would like to manage the release of
1.6.0. As this will be my first time managing a release, Sam
Is there any other PR that fails because of those tests? Can you reproduce the
failure without your PR? It seems pretty strange to me to disable a test if
there is no explanation of why the test failure is unrelated to the PR...
On 2019/10/07 20:35:33, Anirudh Acharya wrote:
> Hi Sam, Lin and
There seems to be a problem with the API docs (at least Python; I did not check
the others). For example, this page:
https://mxnet.incubator.apache.org/api/python/docs/api/symbol/_autogen/mxnet.symbol.Symbol.argmax.html
says that it is a convenience method for argmax (with a link), but clicking
that link just
+1
Compiled and tested CUDA build. No problems encountered.
On 2019/07/09 21:10:55, Qing Lan wrote:
> Have successfully fixed the issue on OSX.
>
> Scala/Java build is fine:
>
> osx-cpu passed (Qing)
> linux-cpu passed (Zach)
> linux-gpu passed (Zach)
>
> +1 for the release.
>
>
-1
There is a crash in the Sockeye unit tests (python setup.py test), observed
starting with the nightly 1.5 build from 6/13 and still occurring in 1.5rc1. I
don't yet have the exact commit that is responsible for it, but it is either
a862270beb2d796c1ba311183f7f4a766a18ad6c (dlpack related) or
ded what the default should be."
> Any hints for the user?
What would you suggest?
> Is it possible to switch off some of fusion by user or add more?
Do you mean something like "fuse only add and sigmoid", or doing only the
pointwise fusion vs. some other kind of fusion?
>
> Thanks,
>
>
Hello Community,
DL models, besides compute-intensive operations like convolutions and fully
connected layers, feature many simple pointwise (aka elementwise) operations
(like elementwise addition, etc.). The performance of those operations is fully
memory-bandwidth bound, and so it limits
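A rough illustration of the kind of pattern such fusion targets; this assumes a
GPU build and that the MXNET_USE_FUSION environment variable is the switch for
the feature in the final PR:

import os
os.environ['MXNET_USE_FUSION'] = '1'  # assumed switch; set before importing mxnet

import mxnet as mx
from mxnet import gluon, nd

class PointwiseChain(gluon.HybridBlock):
    def hybrid_forward(self, F, x):
        # Each op alone is memory-bandwidth bound: unfused, every one
        # of them re-reads and re-writes the whole tensor from memory.
        y = F.sigmoid(x + 1.0)
        return y * F.tanh(y)

net = PointwiseChain()
net.hybridize(static_alloc=True)  # fusion operates on the symbolic graph
x = nd.random.uniform(shape=(1024, 1024), ctx=mx.gpu())
out = net(x)
nd.waitall()

Fused, the whole chain reads x once and writes the result once, instead of
paying the full memory round-trip for every individual op.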
Hi Sam and Zhennan,
the problem is not how to implicitly produce a list of all operators not in any
other list - that is easy, and the code Zhennan provided would work. The problem
is that such a list would not actually be correct in all cases - you do NOT
want optimizers to land in FP32_FUNCS
Thank you all for responding.
Let me address a few misunderstandings about AMP in the discussion so far:
- I think the main misunderstanding in the discussion so far is that it would
be somehow possible to implicitly cast all previously unseen operators to FP32.
I would much prefer such
Dear Community,
One of the recently merged features of the 1.5 release, AMP (Automatic Mixed
Precision) support (PR [1], design doc [5]), introduced a requirement that
every new operator added to MXNet would need to be present in one of the lists
(in [2]). To make sure that this requirement is
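For context, a sketch of the coverage check this requirement implies; the
module and list names follow the AMP PR and may differ between versions:

import mxnet as mx
from mxnet.contrib.amp.lists import symbol as amp_lists

covered = set()
for name in ('FP16_FUNCS', 'FP32_FUNCS', 'FP16_FP32_FUNCS',
             'WIDEST_TYPE_CASTS', 'CONDITIONAL_FP32_FUNCS'):
    for entry in getattr(amp_lists, name, []):
        # CONDITIONAL_FP32_FUNCS entries are (op, arg, values) tuples
        covered.add(entry[0] if isinstance(entry, tuple) else entry)

missing = [op for op in mx.operator.get_all_registered_operators()
           if op not in covered]
print('operators missing from the AMP lists:', missing)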