Congratulations, Sam, and thanks for all of your great work in MXNet.
> -Original Message-
> From: Chaitanya Bapat
> Sent: Thursday, July 30, 2020 1:12 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [Announcement] New Committer - Sam Skalicky
>
> Congratulations Sam! Well deserved!
>
Several people in the list below are from Intel, and I have added them to CC.
Sheng, you can contact them about the ICLA.
Thanks,
--Patric
> -Original Message-
> From: Sheng Zha
> Sent: Monday, July 27, 2020 5:33 AM
> To: Justin Mclean
> Cc: d...@mxnet.apache.org; Wall Michael ; Bob
+1
Passed the performance benchmarks for the CPU tests; no regression was found.
> -Original Message-
> From: Aston Zhang
> Sent: Sunday, July 19, 2020 1:45 PM
> To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org; Bob Paulin ; Henri Yandell
> ; Jason Dai ; Markus Weimer
>
I have tried it and it's really useful.
Thanks for the improvements, Yang.
> -Original Message-
> From: sandeep krishnamurthy
> Sent: Thursday, May 21, 2020 2:42 PM
> To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org
> Subject: Re: Global Search Now Available on MXNet
Agreed, we should add a selection for the nightly build on the installation page.
https://mxnet.apache.org/get_started?version=master=linux=python=pip=cpu#
> -Original Message-
> From: Haibin Lin
> Sent: Tuesday, December 17, 2019 2:40 PM
> To: dev@mxnet.incubator.apache.org
> Cc:
From my view, performance is a big plus for MXNet and the reason why lots of
people adopted it.
I still think we need to have a top-level class for "performance".
Thanks,
--Patric
> -Original Message-
> From: Chen, Ciyong
> Sent: Monday, December 23, 2019 12:08 PM
> To:
Thanks, Trędak, I will add some words about the new feature to the release note.
+1 for the vote; we have run multiple rounds of tests locally and got the
expected performance boost.
--Patric
> -Original Message-
> From: Przemysław Trędak
> Sent: Tuesday, December 17, 2019 4:49 AM
>
Thanks, Sam.
The root cause is the difference in OpenMP libraries; Intel OpenMP provides
better performance, as your data shows.
Regarding the release: because of the license issue [1], we can't ship Intel
OpenMP in the binary, but most of the performance boost from MKLDNN is still
available.
I think it
It’s great that we have a full list of MXNet applications.
I think it would be better if the MXNet community could maintain an official
list on the MXNet website.
Thanks,
--Patric
From: Chaitanya Bapat
Sent: Monday, November 25, 2019 8:36 AM
To: dev@mxnet.incubator.apache.org; u...@mxnet.apache.org
Hi MXNet community,
Release 1.6 is WIP and will come out soon. I think it’s time to discuss
the roadmap for 1.7.
I have created a GitHub thread (#16864) for discussing new features.
Feel free to add your plans to it:
https://github.com/apache/incubator-mxnet/issues/16864
Thanks,
On Tue, Nov 19, 2019 at 8:08 AM Chris Olivier
> > wrote:
> >
> >> Thanks, Patric. I was just trying to point out that there was
> >> currently no guarantee of deterministic results without MKL, so
> >> there’s not necessarily an expectation of determinism with MKL (ie
or your work over the years to make
> >> > > >> MXNet
> >> fast
> >> > > with
> >> > > >> MKLDNN!
> >> > > >>
> >> > > >> I think it would be great to make MKLDNN enabled by default.
> >>
> > >> what
> > do
> > > > you
> > > >> propose we call the build without MKLDNN? mxnet-nomkl?
> > > >>
> > > >> Thanks!
> > > >> Sam
> > > >>
> > > >>> On Nov 18, 2019, at 11:08
Hi MXNet community,
Since the first MKLDNN backend was integrated in release 1.2, the community
has been continuously improving the quality and performance of the MKLDNN CPU
backend.
Nowadays, the MKLDNN backend is widely used for inference, especially for
INT8 inference, and we have received lots of very
>
> Hi MXNet Community,
>
> I’ve been doing some testing on the performance of MXnet 1.6.x vs 1.5.1 and
> I noticed some regression in training. You can find more details here:
> https://github.com/apache/incubator-mxnet/issues/16845
>
> Thanks,
> Jonathan
>
> O
by
> Hao -
> https://github.com/apache/incubator-mxnet/pull/16711
> It would be great to have that cherry-picked.
> Thanks
> Chai
>
>
> On Thu, 7 Nov 2019 at 17:33, Zhao, Patric wrote:
>
> > Thanks for the great efforts.
> >
> > I think belo
I read the proposal, but there is little technical explanation of why BytePS is
better than Horovod or other HW-provided libraries.
It would be better if more technical details about BytePS could be included in
the proposal.
Thanks,
--Patric
> -Original Message-
> From: Lin Yuan
> Sent: Sunday,
Thanks for the great efforts.
I think the PR below needs to be backported to 1.6 as a bugfix for large
tensor support.
https://github.com/apache/incubator-mxnet/pull/16737
--Patric
> -Original Message-
> From: Przemysław Trędak
> Sent: Friday, November 8, 2019 5:46 AM
> To:
The issue is fixed by https://github.com/apache/incubator-mxnet/pull/16693
Do the latest nightly build and tests pass?
Thanks,
--Patric
> -Original Message-
> From: Zhao, Patric
> Sent: Friday, November 1, 2019 12:13 PM
> To: dev@mxnet.incubator.apache.org; d...@mxne
Please take a look at this.
>
> Przemek
>
> On 2019/11/01 02:41:01, "Zhao, Patric" wrote:
> > Hi Przemek,
> >
> > The MKLDNN upgrade PR was merged in Oct 31. Please double check the
> nightly build and going forward for the release progress.
>
Hi Przemek,
The MKLDNN upgrade PR was merged on Oct 31. Please double-check the nightly
build and keep the release process moving forward.
Feel free to ping me if there is anything we can help with.
Thanks,
--Patric
> -Original Message-
> From: Przemysław Trędak
> Sent: Friday, October 25, 2019
Thanks, Przemek.
We're catching up on the MKL-DNN upgrade parts, but the unstable CI is
currently slowing down our development progress a lot.
Hopefully we can merge all the PRs next week if CI is back up soon.
I will keep you updated on our progress.
Thanks,
--Patric
> -Original Message-
>
> All the best,
>
> Thomas Delteil
>
> Le lun. 7 oct. 2019 à 19:58, Zhao, Patric a écrit :
>
> > I find there is no "search bar" in the website today.
> >
> > Could anyone check it?
> >
> > Thanks,
> >
> > --
> >
> > > > > > > In the meanwhile, any help is appreciated, and more than the
> > value
> > > of
> > > > > the
> > > > > > > fixes, let me repeat that there is tremendous value in
> > > > > > > having
> > more
> > > > &
For the install page [1], I suggest adding a selection for the DeepNumpy
backend [2], which will be cleaner.
[1] http://mxnet.incubator.apache.org/index.html
[2] https://numpy.mxnet.io/#installation
> -Original Message-
> From: kellen sunderland
> Sent: Monday, September 23, 2019
Congratulations, Tao!
> -Original Message-
> From: Sheng Zha
> Sent: Monday, September 23, 2019 12:07 PM
> To: d...@mxnet.apache.org
> Subject: [Announcement] New PPMC Member - Tao Lv
>
> Hi all,
>
> Please join me in welcoming Tao Lv as a new PPMC member of Apache
> MXNet
Minor suggestion:
I think we can add more to the features page to attract users and highlight
MXNet's differentiation.
Something like quantization, faster inference and training, Horovod support,
AMP, automatic fusion on the fly...
http://mxnet.incubator.apache.org/features
+1
Tested MKLDNN backend and everything looks great.
> -Original Message-
> From: Qing Lan
> Sent: Wednesday, September 18, 2019 2:20 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [VOTE] Release Apache MXNet (incubating) 1.5.1.rc0
>
> +1 for Scala/Java test. Passed all tests
Hi Aaron,
Recently, we have been working on improving the CPU backend documentation
based on the current website.
I saw there are several PRs to update the new website, which is really great.
Thus, I'd like to know when the new website will be online.
If it's very near, we will switch our work to the
once the branch is frozen. I'm not sure if there are any
> other performance tests.
>
> On Mon, Aug 12, 2019 at 9:36 PM Marco de Abreu
>
> wrote:
>
> > Hi Patric,
> >
> > CI should automatically pick up the branch and validate it as usual.
> >
> > Bes
It's great work, Tao.
Regarding the open issue, is there a default code owner/maintainer? If so,
they will be the right person to look into the issue.
https://github.com/apache/incubator-mxnet/blob/master/CODEOWNERS
Do we have regular build, run, functionality, and performance testing for
Congratulations, Lai.
Well done on the very challenging 1.5 release; you kept the process moving
smoothly.
> -Original Message-
> From: kellen sunderland
> Sent: Sunday, August 4, 2019 9:32 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [Announcement] New Committer - Lai
+1
Tested MXNet with the MKLDNN backend for fp32/int8 inference and training coverage.
Both functionality and performance are great.
> -Original Message-
> From: sandeep krishnamurthy
> Sent: Wednesday, July 10, 2019 7:03 AM
> To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org
Installed from the pre-built package.
> > > > --System Info--
> > > > Platform : Linux-4.15.0-1035-aws-x86_64-with-Ubuntu-18.04-bionic
> > > > system : Linux
> > > > node : ip-172-31-63-171
> > > > release : 4.15.0-1035-aws
>
Thanks for raising the issue; we will take a look ASAP.
The downstream cases are not in the MXNet CI, so it's hard for MXNet
developers to catch potential bugs or performance degradation.
In the future, I suggest adding the major downstream test cases, e.g. from
Sockeye, GluonNLP, GluonCV,
+1 for this proposal. Operator fusion is a very common technique to improve
effective memory bandwidth and reduce latency.
My suggestions:
* Flexibility
Fusion, especially pointwise fusion, is backend- and device-independent,
so it's better to make the solution more flexible
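To illustrate what pointwise fusion buys us (this is a toy sketch in plain Python, not MXNet's actual fusion machinery), compare computing relu(a * b + c) as three separate ops, each materializing an intermediate array, versus one fused pass over the data:

```python
# Toy illustration of pointwise fusion -- not MXNet code.

def unfused(a, b, c):
    # Three separate passes; two temporary lists are materialized,
    # so the data is read/written from memory multiple times.
    t1 = [x * y for x, y in zip(a, b)]       # mul
    t2 = [x + y for x, y in zip(t1, c)]      # add
    return [max(x, 0.0) for x in t2]         # relu

def fused(a, b, c):
    # One pass, no intermediates: less memory traffic, same result.
    return [max(x * y + z, 0.0) for x, y, z in zip(a, b, c)]
```

Both produce identical results; the fused form simply avoids the intermediate memory round-trips, which is where the bandwidth/latency win comes from.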
Thanks for the new proposal.
My concern with the current proposal is that, with such backend-specific info
in the operator, scripts/code will NOT be portable or backward compatible,
and usage complexity will increase.
Let's say the user sets the backend parameter in their script,
Congratulations, Darren :) Thanks for your great work on Horovod.
> -Original Message-
> From: Chaitanya Bapat [mailto:chai.ba...@gmail.com]
> Sent: Friday, May 24, 2019 9:46 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [Announcement] New Committer - Yuxi Hu
>
>
Thanks, Lai.
With great help from the community, all PRs listed in the roadmap are done :)
https://github.com/apache/incubator-mxnet/issues/14619#issuecomment-480110642
Updating the status of the list below:
- [1] PR#14713 is almost done and waiting for internal validation results
- [2]
Congrats, Zhennan.
Really great work; it makes the MXNet quantization flow stand out worldwide!
> -Original Message-
> From: Lv, Tao A [mailto:tao.a...@intel.com]
> Sent: Tuesday, April 30, 2019 11:01 PM
> To: dev@mxnet.incubator.apache.org
> Subject: RE: [Announcement] New
BTW, "maintainability, testability and readability" has always been our design
goal from the start of the MKL-DNN integration :)
> -Original Message-
> From: Lv, Tao A [mailto:tao.a...@intel.com]
> Sent: Wednesday, April 10, 2019 11:03 AM
> To: dev@mxnet.incubator.apache.org
> Subject: RE:
Agreed.
Recently, we (Tao, Shufan, Pengxin) have been trying to integrate the Intel
MKL math functions into mshadow and MXNet.
We have to work across two repos and make lots of tradeoffs between them.
If we can move mshadow into MXNet, it will be more flexible to redesign and
refactor parts of the legacy
+1 single build system.
> -Original Message-
> From: Qing Lan [mailto:lanking...@live.com]
> Sent: Friday, April 5, 2019 5:27 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [MXNET 2.0 Wishlist] [DISCUSS] Single build system
>
> +1 to have a single build system
>
> Currently
etups+and+Hangou
> ts
>
> I also strongly agree with your 4). I think we should have a clear roadmap on
> our wiki page and/or github repo.
>
> Again, welcome on board!
>
> Lin
>
>
> On Sun, Mar 17, 2019 at 7:33 AM Zhao, Patric
> wrote:
>
> > V
Great points!
+1 for 4) and 5)
> -Original Message-
> From: Zach Boldyga [mailto:z...@scalabull.com]
> Sent: Sunday, March 17, 2019 8:33 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: Call for Ideas and Approaches to Community Building
>
> This is a great discussion,
I am very glad to have this opportunity to contribute to the Apache/MXNet
community :)
Thanks for all the support from the community and Intel.
BR,
--Patric
> -Original Message-
> From: MiraiWK WKCN [mailto:w...@live.cn]
> Sent: Friday, March 15, 2019 12:52 AM
> To:
Congratulations!
We have cooperated with Kan before; he is easy to communicate with and very
professional :)
It's really well deserved!
> -Original Message-
> From: Lv, Tao A [mailto:tao.a...@intel.com]
> Sent: Tuesday, February 19, 2019 2:17 PM
> To: dev@mxnet.incubator.apache.org;
Update: the issue is fixed, and the new patch release, MKL-DNN 0.17.4, is out.
Tao filed a PR to update the MKLDNN version on the 1.4.x release branch:
https://github.com/apache/incubator-mxnet/pull/14141
Thanks for all of your help :)
--Patric
> -Original Message-
> From: Zhao,
Agreed on tracking the 3rd-party packages, which make MXNet more prosperous :)
Before building the CI, I suggest creating the related labels in GitHub, like
sockeye, gluonCV, gluonNLP, etc., and giving high priority to these
issues/PRs.
That way the issues/PRs can be fixed quickly and these important
Hi Sheng,
Thanks for raising this important issue. Sorry for the lack of validation; we
don't have a Mac machine with an earlier OS version in house.
I will contact the MKL-DNN team about support for earlier versions of OSX,
but I'm a little afraid the fix will need some extra time.
+1, good idea.
It's not very easy to find related content, since there are lots of folders
on the website.
> -Original Message-
> From: Sheng Zha [mailto:zhash...@apache.org]
> Sent: Saturday, January 19, 2019 3:28 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Taxonomy on our cwiki
+1 for this great proposal.
MXNet will be more flexible and portable with this new feature :)
Thanks,
--Patric
> -Original Message-
> From: sandeep krishnamurthy [mailto:sandeep.krishn...@gmail.com]
> Sent: Thursday, January 17, 2019 8:47 AM
> To: dev@mxnet.incubator.apache.org
>
Dear all,
I am pleased to announce that MKLDNN is now the default CPU backend on the
master branch for the Linux platform.
(Note: the nightly build and release don't change.)
Many thanks for the great support and joint work from the community.
Feedback is highly appreciated :)
Congratulations, Da!
Many thanks for your great support, and looking forward to more cooperation
together :)
> -Original Message-
> From: Tianqi Chen [mailto:tqc...@apache.org]
> Sent: Tuesday, December 18, 2018 1:02 AM
> To: dev@mxnet.incubator.apache.org
> Subject: [Annoucement]
+1, thanks for the efforts, Alex.
> -Original Message-
> From: Alex Zai [mailto:aza...@gmail.com]
> Sent: Tuesday, December 11, 2018 8:00 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Include MKLDNN into default mxnet pip package
>
> Continuation from the following thread:
>
Hi Steffen,
I saw the draft of the 1.4 release notes here
(https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.0+Release+Notes).
Is this near the final version? I'd like to add some descriptions of the new
quantization features enabled in 1.4.
Is that OK?
Thanks,
MKL-DNN v0.17.1 has been released: https://github.com/intel/mkl-dnn/tree/v0.17.1
I have submitted a PR to pin this release version.
Thanks,
--Patric
> -Original Message-
> From: Zhao, Patric [mailto:patric.z...@intel.com]
> Sent: Wednesday, November 28, 2018 8:07 PM
+1 for making MKL-DNN the default on the master branch first for broad testing :)
My suggestion is to make MKL-DNN the default on the master branch right after
the 1.4.0 release branch is cut. That will help the MKL-DNN backend be widely
used and tested by MXNet users who build MXNet from
Hi Anirudh,
The LSTM performance bug is fixed by MKL-DNN; the PR is here
(https://github.com/apache/incubator-mxnet/pull/13417).
I am still working with the MKL-DNN team to get a patch release for MXNet 1.4
in 1 or 2 days.
I will update the status soon.
Thanks everyone.
--Patric
> -Original
Congratulations, Tao.
> -Original Message-
> From: kellen sunderland [mailto:kellen.sunderl...@gmail.com]
> Sent: Tuesday, November 27, 2018 11:17 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: [Anouncement] New Committer: Tao Lv
>
> Welcome Tao!
>
> On Mon, Nov 26, 2018 at
> Subject: RE: MKLDNN performance in CI
>
> I think yes, except the cpp test.
>
> -Original Message-
> From: Zhao, Patric [mailto:patric.z...@intel.com]
> Sent: Friday, November 23, 2018 10:06 AM
> To: dev@mxnet.incubator.apache.org
> Subject: RE: MKLDNN
Good point, Tao!
Is this environment variable enabled in all MKL-DNN CI runs?
> -Original Message-
> From: Lv, Tao A [mailto:tao.a...@intel.com]
> Sent: Friday, November 23, 2018 9:53 AM
> To: dev@mxnet.incubator.apache.org
> Subject: RE: MKLDNN performance in CI
>
> Thanks for bringing this up, Marco. It's
Happy Thanksgiving, everyone :)
Hi Marco,
Thanks for raising this question. We will look into the details of the CI test
cases, and Shufan will provide the 1:1 op-level performance data.
In general, the CI tests are not performance cases; they cover many different
situations, even corner cases, for
Hi Kellen,
Thank you very much for recognizing our work :)
This is a great joint effort between the community (Wu Jun, Zheng Da, etc.)
and the Intel team.
We are continuously improving the quantization flow, and more amazing
features will be ready soon.
Thanks,
--Patric
>
Thanks, Steffen. I think there are NO open issues blocking MKLDNN from going GA now.
BTW, several quantization-related PRs (#13297, #13260) are under review, and
I think they can be merged this week.
Thanks,
--Patric
> -Original Message-
> From: Steffen Rochel
Hi Anton,
Thanks for looking into the MKL-DNN PR.
As I understand the cwiki
(https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release),
these features will go into 1.4 rather than the 1.3.1 patch release.
Feel free to correct me :)
Thanks,
--Patric
>
Thanks, Alex, for bringing up this proposal. As far as I know, with the
MKL-DNN backend, MXNet is now the most performant framework on the CPU side.
In particular, the recent subgraph fusion feature boosts performance a lot
again.
Thus, I think it's worth making it the default and letting more
Sending again: our PR was submitted 22 days ago.
Please help review it if you're interested :)
https://github.com/apache/incubator-mxnet/pull/12530
Thanks for the great suggestions from Jun, Da, Haibin, and other committers.
BR,
--Patric
From: Zhao, Patric
Sent: Wednesday, August
o review the design doc and collect
> feedback.
> Are there still known issues or gaps before we declare MKL-DNN
> integration
> as GA?
>
> Regards,
> Steffen
>
> On Sat, Sep 29, 2018 at 1:31 AM Zhao, Patric
> wrote:
>
> > Than
> Regards,
> Steffen
>
> On Sat, Sep 29, 2018 at 1:31 AM Zhao, Patric wrote:
>
> > Thanks, Steffen.
> >
> > Regarding the next release note, two items from our side:
> >
> > 1. (-remove) MKL-DNN integration is done. I think we can remove this item.
>
Welcome, Jason. I think MXNet will achieve great success, the same as BigDL.
Looking forward to working with you :)
> -Original Message-
> From: Hen [mailto:bay...@apache.org]
> Sent: Friday, September 28, 2018 8:23 AM
> To: dev@mxnet.incubator.apache.org
> Cc: Jim Jagielski ; Michael
Hi Roshani,
Good notes :)
Several items about performance and MKL-DNN are below; please help review them.
@Da, Alex, if anything about MKL-DNN is missing, feel free to add it.
* Performance improvement
+ Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on CPU
Hi Leonard,
Thanks for raising the issue with the topk op.
The root cause is the current API design, which uses a float data type to
represent the integer index; as we know, the float type can NOT express
large integers precisely.
(No offense intended. I know I am missing some background, and I
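To make the precision limit concrete (a small stdlib-only demonstration, not MXNet code): IEEE-754 float32 has a 24-bit significand, so every integer up to 2**24 = 16777216 is exact, but 16777217 is not representable and gets rounded, which is exactly why a float-typed index silently loses large values:

```python
import struct

def to_float32(x):
    # Round-trip a Python float through a 32-bit IEEE-754 float.
    return struct.unpack('f', struct.pack('f', x))[0]

# 2**24 is the last point where every integer is exactly representable.
print(to_float32(16777216.0))  # 16777216.0 -- exact
print(to_float32(16777217.0))  # 16777216.0 -- rounded; index 16777217 is lost
```

So any topk index beyond 2**24 returned as float32 can collide with its neighbor, which matches the large-tensor problem described above.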
Hi MXNet owners and committers,
A new proposal for the graph optimization and quantization approach has been
posted on the wiki:
https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN
Many thanks for the support from Zheng Da and
Hi MXNet owners,
We (Intel engineers) have already written up several design proposals and
published them on cwiki.
So I really like this document template; it makes things very clear.
https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Design+Proposal+Template
Furthermore, I suggest adding a
Hi Steffen,
It's really great to share the MXNet roadmap with the community,
so we have a clear picture and can align our strategies.
Regarding the Q3 plan, "High quality support for MKL (incl. MKL-DNN)"
matches our plan very well.
We will move the current solution into the subgraph and
egration Stable Release
>
> * If all existing RNN integration tests pass with MKL-DNN build, this should
> give
> enough confidence?
> * Also, I remember one of the community member saying "mxnet-mkl" pypi
> package is not compiled with MKLDNN. Not sure about this
Hi Alex,
Regarding RNN, the first version of the MKL-DNN RNN API is available on the
MKL-DNN master branch.
We have integrated it into our local branch, and you can try our code (still
in development).
.
> Regards,
> Steffen
>
> On Thu, Jun 21, 2018 at 12:09 AM Zhao, Patric
> wrote:
>
> > Hi MXNET owner,
> >
> > Recently, we (Intel engineers) have implemented the fused RNN
> > operations
> > (LSTM/GRU/vRNN) for the CPU, including bidirectional,
Hi Steffen,
It was a good meetup, and it's a pity I missed it.
Regarding the 1.3 proposal, we (the Intel team) want to add two items to the
list.
Please help review:
1) Fused RNN operators for CPU (GRU/LSTM/vRNN)
Lead contributor: Patric Zhao
+1 for providing the offline materials :)
> -Original Message-
> From: Hen [mailto:bay...@apache.org]
> Sent: Tuesday, April 17, 2018 12:09 PM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: MXNet meetup in Seattle April 24th
>
> Will there be a write up/presentation upload for those
at Hen mentioned?
> > > Also, I'm not familiar with the Apache blog and how you contribute.
> > > I don't see info about it on Confluence or elsewhere. Certainly
> > > sounds like something that needs some attention and to be part of
> > > regular communications.
FYI, users in China can't access medium.com :(
> -Original Message-
> From: Anirudh Acharya [mailto:anirudhk...@gmail.com]
> Sent: Thursday, April 12, 2018 6:31 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: blog for MXNet
>
> There is already an AWS Evangelist, Julien Simon, who
@intel.com>; 'Rahul Huilgol' <rahulhuil...@gmail.com>; Ye, Jason Y
<jason.y...@intel.com>; Zhang, Rong A <rong.a.zh...@intel.com>; Zhao, Patric
<patric.z...@intel.com>
Subject: RE: Extend MXNET distributed training with MPI AllReduce
For our current POC:
b. Add
Hi MXNET owners/developers,
As you know, AllReduce and Parameter Server are two very popular distributed
training modes in DL.
Currently, MXNET only supports parameter server mode and lacks an AllReduce
mode. Other frameworks, like TensorFlow, PyTorch, Caffe, etc., can work with
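For readers new to the distinction: in AllReduce there is no central parameter server; every worker ends up holding the same reduced gradient. A conceptual sketch in plain Python (simulated workers, not MPI/Horovod/MXNet code):

```python
# Conceptual AllReduce sketch -- simulated workers, not real MPI code.

def allreduce_sum(worker_grads):
    """Sum the gradients across all workers, then give every worker a copy."""
    # Reduce step: elementwise sum over all workers' gradient vectors.
    reduced = [sum(vals) for vals in zip(*worker_grads)]
    # Broadcast step: each worker receives the full reduced result.
    return [list(reduced) for _ in worker_grads]

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 workers, 2 parameters each
print(allreduce_sum(grads))  # every worker gets [9.0, 12.0]
```

In a parameter-server setup, by contrast, workers push gradients to and pull parameters from a central server, which can become a bandwidth bottleneck; AllReduce spreads that communication across the workers.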
> Best regards,
> Marco
>
> On Fri, Mar 16, 2018 at 7:45 AM, Zhao, Patric <patric.z...@intel.com>
> wrote:
>
> > MKL issues summary:
> >
> > Feel free to let me know if anything I missed.
> >
> > Totally, there’re 11 open issues in the github wi
Hi MXNET developers,
Since MKL-DNN was integrated into MXNET master last month, we have seen some
confusion about how to build MKL-DNN and Intel MKL into MXNET.
Several GitHub issues were opened, and most of them have been fixed. But I
think we still need to define a clear
-Original Message-
> From: Zheng, Da [mailto:dzz...@amazon.com]
> Sent: Tuesday, March 13, 2018 2:06 AM
> To: Zhao, Patric <patric.z...@intel.com>; steffenroc...@gmail.com;
> dev@mxnet.incubator.apache.org
> Cc: marco.g.ab...@googlemail.com; Ye, Jason Y <jason.y...@intel.c
+source data to read clearly.
> -Original Message-
> From: Zhao, Patric [mailto:patric.z...@intel.com]
> Sent: Friday, March 16, 2018 8:29 AM
> To: 'dev@mxnet.incubator.apache.org' <dev@mxnet.incubator.apache.org>
> Cc: Huang, Jin1 <jin1.hu...@intel.com>; Da Z
age-
> From: Zhao, Patric
> Sent: Wednesday, March 14, 2018 9:33 PM
> To: dev@mxnet.incubator.apache.org
> Cc: Huang, Jin1 <jin1.hu...@intel.com>
> Subject: MKLDNN Build (pre: call for contributions to next MXNet release)
>
> Hi Pedro,
>
> Thanks for th
n the website.
>
> What's the performance increase of using MKL in osx when running in CPU
> mode?
>
> Pedro
>
> On Wed, Mar 14, 2018 at 1:54 PM, Zhao, Patric <patric.z...@intel.com>
> wrote:
>
> > My fault, typo for your name, Larroy :(
> >
> > >
My fault, typo for your name, Larroy :(
> -Original Message-
> From: Zhao, Patric [mailto:patric.z...@intel.com]
> Sent: Wednesday, March 14, 2018 8:40 PM
> To: dev@mxnet.incubator.apache.org; Huang, Jin1 <jin1.hu...@intel.com>
> Subject: RE: call for contri
incubator-
> mxnet/blob/master/CMakeLists.txt#L158
>
> https://github.com/apache/incubator-mxnet/issues/10072
>
> It wrongly assumes that you have MKL installed, MKLDNN needs to check if
> MKL is available before, or be disabled by default, also in non intel
> platforms.
>
>
subset)?
> - do you have performance measurements (or plan to measure) to include in
> release notes?
> - should we talk about the package at the Apr 24th meetup in Seattle?
>
> Steffen
>
> On Sat, Mar 10, 2018 at 4:40 AM Zhao, Patric <patric.z...@intel.com>
> wrote:
>
>
the next
> release.
> At the moment, I'd not be in favour of having MKLDNN being part of it.
>
> Best regards,
> Marco
>
> Zhao, Patric <patric.z...@intel.com> schrieb am Sa., 10. März 2018, 02:30:
>
> > Hi Steffen,
> >
> > We'd like the MKL-DNN bac
Hi Steffen,
We'd like the MKL-DNN backend to be included in the next release.
We (Intel engineers) and Zheng Da (AWS engineer) can work on it.
Could you help add the item to the table?
Thanks,
--Patric
> -Original Message-
> From: Steffen Rochel [mailto:steffenroc...@gmail.com]
>
@Chris, please add them to the group too.
> -Original Message-
> From: YiZhi Liu [mailto:liuyi...@apache.org]
> Sent: Friday, March 9, 2018 8:40 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: Assign JIRA issue
>
> Yes it is. Thanks!
>
> 2018-03-08 16:04 GMT-08:00 Chris Olivier
Thanks,
--Patric
m>; Jiang,
Wenting <wenting.ji...@intel.com>; Zhao, Patric <patric.z...@intel.com>
Subject: Re: Intel Plan for the contribution to MXNET
Hi Patric,
Thanks for the contribution. It’s great to see actions on developing INT8
inference for CPU! I have a few questions and hope to have your
Hi MXNET developers,
We are from the Intel Software and Services Group (SSG), working on
performance optimization for MXNET on Intel Architecture (IA).
Let me give a brief introduction to our ongoing projects.
Any suggestions and comments are highly appreciated.
1) MKL-DNN