RE: [RFC] Support for creation of Large Tensors in MXNet

2019-05-25 Thread Lv, Tao A
/incubator-mxnet/pull/13723 https://github.com/dmlc/mshadow/pull/365 Feel free to let me know if there is anything I can help with. Thanks, -tao -Original Message- From: Lin Yuan [mailto:apefor...@gmail.com] Sent: Saturday, May 25, 2019 1:36 AM To: dev@mxnet.incubator.apache.org; Lv, Tao A Cc: d

RE: [RFC] Support for creation of Large Tensors in MXNet

2019-05-18 Thread Lv, Tao A
with providing initial support for such large tensors so MXNet customers can start using it. Good to hear MKLDNN will provide support for such cases. Do you have a timeline as to when this feature will be released? -Rohit On 4/29/19, 7:18 PM, "Lv, Tao A" wrote: Thank you Lin! I wo

RE: [DISCUSS] 1.5.0 Release Plan

2019-05-15 Thread Lv, Tao A
Hi dev, We see there are several github issues [1][2][3][4] about mxnet windows build experience. The team is working intensively [5][6][7] on that to fix some problems of MKL-DNN build on windows. We hope these fixes can catch the code freeze and finally enter the 1.5.0 release. The PR

RE: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-06 Thread Lv, Tao A
by recompiling cmake with ssl support (so basically just a problem on my end). After that MKL downloaded correctly, and everything compiled correctly with Anirudh's build flags. On Sun, May 5, 2019 at 6:59 PM Lv, Tao A wrote: > Hi Kellen, does the problem still exist for you? I just built mx

RE: [VOTE] Release Apache MXNet (incubating) version 1.4.1.rc0

2019-05-05 Thread Lv, Tao A
Hi Kellen, does the problem still exist for you? I just built mxnet 1.4.1rc0 + mkldnn from source with cmake on my centos machine and everything works well: -- Downloading MKLML... -- [download 0% complete] ... -- [download 100% complete] -- Setting MKLROOT path to

RE: [Announcement] New Committer - Zhennan Qin

2019-04-30 Thread Lv, Tao A
Congratulations Zhennan! -Original Message- From: Jun Wu [mailto:wujun@gmail.com] Sent: Tuesday, April 30, 2019 12:29 PM To: dev@mxnet.incubator.apache.org Subject: [Announcement] New Committer - Zhennan Qin Please join me in welcoming Zhennan Qin (https://github.com/ZhennanQin)

RE: [Announcement] New Committer - Hao Jin

2019-04-30 Thread Lv, Tao A
Congratulations Hao! -Original Message- From: Jun Wu [mailto:wujun@gmail.com] Sent: Tuesday, April 30, 2019 12:29 PM To: dev@mxnet.incubator.apache.org Subject: [Announcement] New Committer - Hao Jin Please join me in welcoming Hao Jin (https://github.com/haojin2) from AWS as a new

RE: Proposal for Conversion from FP32 to Mixed Precision Models

2019-04-30 Thread Lv, Tao A
complicated and not very easy to use. I understand that this may cause some confusion as people may try to use target_dtype of int8, but I think it's still better than causing user confusion with the API usage. Also, when

RE: [RFC] Support for creation of Large Tensors in MXNet

2019-04-29 Thread Lv, Tao A
in this project that Rohit proposed. What is the plan in MKLDNN to support large tensors? We may want to coordinate the progress since many operators are using MKLDNN implementation in CPU now. Many Thanks, Lin On Sun, Apr 28, 2019 at 7:52 PM Lv, Tao A wrote: > Thank you for bringing this topic to

RE: Proposal for Conversion from FP32 to Mixed Precision Models

2019-04-29 Thread Lv, Tao A
sed for inference to disk. It will save both, the symbol with the amp_cast and amp_multicast operators and the params (which are casted if necessary). Anirudh On Mon, Apr 29, 2019 at 6:55 AM Lv, Tao A wrote: > Thank you for sharing this, Anirudh. > > Curious to know: > - what

RE: Proposal for Conversion from FP32 to Mixed Precision Models

2019-04-29 Thread Lv, Tao A
Thank you for sharing this, Anirudh. Curious to know: - what will be saved in a training checkpoint or snapshot? Can it be resumed on another platform which might not support the lower precision the previous one used? - what will be saved in the final symbol.json and params file when training
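The portability question above stems from the fact that casting parameters to a narrower type is lossy. A minimal numpy sketch of the precision lost when an FP32 value is cast to FP16 (illustrative only; this is not MXNet's AMP implementation):

```python
import numpy as np

# float16 has a 10-bit mantissa, so near 1.0 its spacing is 2**-10.
x32 = np.float32(1.0) + np.float32(2.0) ** -12   # representable in fp32
x16 = np.float16(x32)                            # rounds to nearest fp16

# The 2**-12 increment is below fp16 resolution near 1.0, so it is
# rounded away in the cast.
lost = float(x32) - float(x16)
```

This is why a checkpoint that stores already-casted params cannot recover the original FP32 values, and why resuming on a platform without FP16 support needs the uncasted copy.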

RE: [RFC] Support for creation of Large Tensors in MXNet

2019-04-28 Thread Lv, Tao A
Thank you for bringing this topic to dev, Rohit. Regarding large tensor, can you articulate: - what's the max size of dimensionality? Which data type is used to define dimensionality (ndims)? - what's the max size of each dimension? Which data type is used to define dimension size (shape[x])? -
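The dtype questions above matter because the index type bounds the largest tensor that can be addressed. A small sketch of checking whether a shape's element count fits a given index type (`num_elements` is a hypothetical helper for illustration, not an MXNet API):

```python
import numpy as np

def num_elements(shape, index_dtype=np.int64):
    """Return (total elements, whether the count fits the index type)."""
    total = 1
    for dim in shape:
        total *= dim
    return total, total <= np.iinfo(index_dtype).max

# A 2**16 x 2**16 tensor holds 2**32 elements: addressable with int64
# indices, but beyond the 2**31 - 1 limit of a 32-bit signed index.
total, fits64 = num_elements((2**16, 2**16), np.int64)
_, fits32 = num_elements((2**16, 2**16), np.int32)
```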

RE: [MXNET 2.0 Wishlist] [DISCUSS] Refine the InferStorageType and memory planning pass

2019-04-10 Thread Lv, Tao A
wed as an enhanced version of AlterOpLayout in the TVM relay Pass On Tue, Apr 9, 2019 at 8:03 PM Lv, Tao A wrote: > Thank you Tianqi and Sam for the kind suggestions. > @Tianqi, > Can you please point me to the code of this pass

RE: [MXNET 2.0 Wishlist] [DISCUSS] Refine the InferStorageType and memory planning pass

2019-04-09 Thread Lv, Tao A
separate optimization > pass rather than memory planning. As is done in the TVM stack. If we > want to do a clean slate solution, I would recommend looking into that > instead. > > Tianqi > > On Tue, Apr 9, 2019 at 1:46 AM Lv, Tao A wrote:

[MXNET 2.0 Wishlist] [DISCUSS] Refine the InferStorageType and memory planning pass

2019-04-09 Thread Lv, Tao A
Hi dev, As we're discussing the roadmap for MXNet 2.0, I would like to start a thread about refining the InferStorageType and memory planning pass in MXNet and hope it can happen as a part of the 2.0 release. Thanks to @eric-haibin-lin, part of the proposal has already been discussed in
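For readers unfamiliar with the pass under discussion: storage-type inference walks the graph in topological order and derives each node's output storage type from its inputs. A toy sketch of the idea (the propagation rule here is invented for illustration and is not MXNet's actual logic):

```python
def infer_storage_types(graph, input_stypes):
    """graph: list of (node_name, input_names) in topological order."""
    stypes = dict(input_stypes)
    for node, inputs in graph:
        in_types = [stypes[name] for name in inputs]
        # Toy rule: stay in the 'mkldnn' layout only while every input
        # is already in it; otherwise fall back to the default layout.
        keep = bool(in_types) and all(t == "mkldnn" for t in in_types)
        stypes[node] = "mkldnn" if keep else "default"
    return stypes
```

A real pass would also consult per-operator rules and insert layout-conversion nodes where types disagree, which is part of what a refined InferStorageType/memory-planning design would need to handle.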

RE: Requesting slack access

2019-03-27 Thread Lv, Tao A
An invitation has been sent to d...@paren.com. You can find the ‘mxnet’ channel in ASF. Welcome to the Apache MXNet community! -tao From: Dom Kiva-Meyer [mailto:d...@paren.com] Sent: Thursday, March 28, 2019 9:07 AM To: d...@mxnet.apache.org Subject: Requesting slack access

RE: [Announcement] New Committer - Patric Zhao

2019-03-15 Thread Lv, Tao A
Wow, congratulations Patric! -Original Message- From: Steffen Rochel [mailto:steffenroc...@gmail.com] Sent: Friday, March 15, 2019 10:25 PM To: dev@mxnet.incubator.apache.org Cc: patric zhao Subject: Re: [Announcement] New Committer - Patric Zhao Congratulation Patrick! Steffen On Fri,

RE: [Announcement] New Committer - Kan Wu (@wkcn)

2019-02-18 Thread Lv, Tao A
Congratulations Kan! Well deserved! -Original Message- From: Sheng Zha [mailto:szha@gmail.com] Sent: Tuesday, February 19, 2019 2:10 PM To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org Cc: Anirudh Subramanian ; Jackie Wu Subject: [Announcement] New Committer - Kan Wu

RE: [Announcement] New Committer -- Lin Yuan

2019-02-03 Thread Lv, Tao A
Congratulations Lin! -Original Message- From: kellen sunderland [mailto:kellen.sunderl...@gmail.com] Sent: Sunday, February 3, 2019 3:10 PM To: dev@mxnet.incubator.apache.org Subject: Re: [Announcement] New Committer -- Lin Yuan Congrats Lin! Well deserved. On Sat, Feb 2, 2019 at

RE: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc2

2019-01-31 Thread Lv, Tao A
+1. Verified below items: 1. Checkout code from tag 1.4.0rc2 and build mkldnn backend successfully on both cpu and gpu w/ mkl and openblas 2. ResNet50v1 FP32 performance looks good for both latency and throughput 3. Quantization script works well with ResNet50v1 4. ResNet50v1 INT8 model

RE: Apache MXNet v1.4.0 release status

2019-01-21 Thread Lv, Tao A
On Tue, Jan 8, 2019 at 11:28 AM Qing Lan wrote:

Cherry pick bug fix from master branch to v1.4.x

2019-01-15 Thread Lv, Tao A
Hi community, As the 1.4.0 release is still in process, I would like to propose cherry picking https://github.com/apache/incubator-mxnet/pull/13843 into the v1.4.x branch. It fixed a crash in the quantized SSD example on the master branch which was reported by an MXNet user. This issue also exists on

RE: [DISCUSS] Make MKLDNN as a default on Maven nightly build

2019-01-14 Thread Lv, Tao A
have 'mxnet' package without MKL-DNN backend and 'mxnet-mkl' with MKL-DNN backend. -tao -Original Message- From: Lv, Tao A [mailto:tao.a...@intel.com] Sent: Tuesday, January 15, 2019 8:14 AM To: dev@mxnet.incubator.apache.org Subject: Re: [DISCUSS] Make MKLDNN as a default on Maven nightly

Re: [DISCUSS] Make MKLDNN as a default on Maven nightly build

2019-01-14 Thread Lv, Tao A
MKLML has already been released with mxnet-mkl for 4 versions: 1.2.0, 1.2.1, 1.3.0, 1.3.1. > On Jan 15, 2019, at 3:43 AM, Zach Kimberg wrote: > > There should not be a problem including MKLML since the above (BSD > 3-clause) is Category A under apache ( >

RE: [ANNOUNCE] MKLDNN becomes the default CPU backend in Apache/MXNet master branch

2019-01-12 Thread Lv, Tao A
Thanks for the great collaboration through the community to make things happen. :) -Original Message- From: Jun Wu [mailto:wujun@gmail.com] Sent: Saturday, January 12, 2019 12:54 PM To: dev@mxnet.incubator.apache.org Cc: u...@mxnet.apache.org Subject: Re: [ANNOUNCE] MKLDNN becomes

RE: Apache MXNet v1.4.0 release status

2019-01-07 Thread Lv, Tao A
What should I do for the double headers in 3rdparty/mkldnn/src/cpu/xbyak/? -tao -Original Message- From: Steffen Rochel [mailto:steffenroc...@gmail.com] Sent: Tuesday, January 8, 2019 10:51 AM To: dev@mxnet.incubator.apache.org Subject: Re: Apache MXNet v1.4.0 release status Kellen and

RE: joining slack channel

2019-01-04 Thread Lv, Tao A
An invitation has been sent to mue...@amazon.com. You can find mxnet in the ASF channels. -Original Message- From: Muenz, Edison [mailto:mue...@amazon.com.INVALID] Sent: Friday, January 4, 2019 11:46 PM To: dev@mxnet.incubator.apache.org Subject: Re: joining slack channel Hi, can I get an invite to

RE: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc0

2018-12-26 Thread Lv, Tao A
+1, verified below items: 1. clone and checkout 1.4.0.rc0 code, build from source with USE_MKLDNN=1 USE_BLAS=mkl 2. run example/image-classification/benchmark_score.py on CPU. The script works well and performance looks good. 3. run models in example/quantization/ on CPU. The scripts work well

RE: [Annoucement] New Committer -- Da Zheng

2018-12-17 Thread Lv, Tao A
Congrats Da! Thank you for the effort on bringing MKL-DNN to MXNet. It's really the cornerstone for the later work and improvements. -Original Message- From: Tianqi Chen [mailto:tqc...@apache.org] Sent: Tuesday, December 18, 2018 1:02 AM To: dev@mxnet.incubator.apache.org Subject:

RE: Cambricon MLU support for MXNet.

2018-12-17 Thread Lv, Tao A
"mshadow is being deprecated." Surprised to know that. Was it discussed before? Do we have any document to tell contributors and developers about that? -tao -Original Message- From: Chris Olivier [mailto:cjolivie...@gmail.com] Sent: Monday, December 17, 2018 3:10 PM To:

RE: v1.4.0 status 11/29

2018-11-29 Thread Lv, Tao A
Tao - thanks for fixing the crash. Please create PR on v1.4.x branch with [v1.4.x] in title and add me to the PR. Steffen On Thu, Nov 29, 2018 at 8:44 PM Lv, Tao A wrote: > Hi Steffen, I would like to have > https://github.com/apache/incubator-mxnet/pull/13433 into the coming > 1.4.

RE: v1.4.0 status 11/29

2018-11-29 Thread Lv, Tao A
Hi Steffen, I would like to have https://github.com/apache/incubator-mxnet/pull/13433 into the coming 1.4.0 release. It fixed a crash of deconvolution with certain input size for MKL-DNN backend. This PR is well reviewed and already merged into the master branch. New test case is also

RE: Include MKLDNN into default mxnet pip package

2018-11-28 Thread Lv, Tao A
we should move to releases before we make it a default (Naveen) There was a discussion about MKLDNN version used by MXNet, and would be great if it can be summarized. Hagay On Tue, Nov 27, 2018 at 6:21 PM Lv, Tao A wrote: > Hi Anirudh, please find the statements from MK

RE: Include MKLDNN into default mxnet pip package

2018-11-27 Thread Lv, Tao A
don't know if this is going to happen anytime soon (It would be nice if you can obtain some timeline from the MKLDNN team on this). As long as the PIP still has two different packages for mkl and without mkl my vote is +1 for adding it as a default. Anirudh On Tue, Nov 27, 2018 at 5:04 AM Lv, Tao A wr

RE: [Anouncement] New Committer: Tao Lv

2018-11-27 Thread Lv, Tao A
And Steffen. :) -Original Message- From: Lv, Tao A [mailto:tao.a...@intel.com] Sent: Tuesday, November 27, 2018 9:05 PM To: dev@mxnet.incubator.apache.org Subject: RE: [Anouncement] New Committer: Tao Lv Thank you for the warm welcome, Kellen and Marco :) -Original Message

RE: [Anouncement] New Committer: Tao Lv

2018-11-27 Thread Lv, Tao A
Thank you for the warm welcome, Kellen and Marco :) -Original Message- From: Marco de Abreu [mailto:marco.g.ab...@googlemail.com.INVALID] Sent: Tuesday, November 27, 2018 4:26 PM To: dev@mxnet.incubator.apache.org Subject: Re: [Anouncement] New Committer: Tao Lv Thank you and welcome!

RE: Include MKLDNN into default mxnet pip package

2018-11-27 Thread Lv, Tao A
for it ? Anirudh On Sun, Nov 25, 2018 at 5:03 PM Lv, Tao A wrote: > Hi Steffen, > > I think all the commits on MKL-DNN master branch are well tested for > MKL-DNN development team. If we really want to have a release commit > in the coming 1.4 mxnet release, my suggestion is 0.17

RE: [Anouncement] New Committer: Tao Lv

2018-11-26 Thread Lv, Tao A
Really thank you for the kind invitation and welcome. I'm thrilled to be a committer of this great project. Looking forward to more contributions and making MXNet better. -tao -Original Message- From: Sheng Zha [mailto:szha@gmail.com] Sent: Tuesday, November 27, 2018 11:13 AM To:

Re: Include MKLDNN into default mxnet pip package

2018-11-25 Thread Lv, Tao A
eeding edge snapshots. > However, speed of development is important as well. > As a compromise for 1.4.0 release with MKL-DNN: can the MKL-DNN development > team provide us with a well tested tag/commit id to include in 1.4.0 > release? > Steffen > >> On Wed, Nov 21, 201

Re: MKLDNN performance in CI

2018-11-23 Thread Lv, Tao A
arco On Fri, Nov 23, 2018 at 3:38 AM Zhao, Patric wrote: > Thanks, it should be the most time-consuming parts. > @Marco, could you try to disable this env and see the performance again? -O

RE: MKLDNN performance in CI

2018-11-22 Thread Lv, Tao A
? > -Original Message- > From: Lv, Tao A [mailto:tao.a...@intel.com] > Sent: Friday, November 23, 2018 9:53 AM > To: dev@mxnet.incubator.apache.org > Subject: RE: MKLDNN performance in CI > > Thanks for bringing this up, Marco. It's really weird since most of > those tests

RE: MKLDNN performance in CI

2018-11-22 Thread Lv, Tao A
Thanks for bringing this up, Marco. It's really weird since most of those tests listed in "worth noting" are not related to mkldnn backend. I can understand that some tests for mkldnn operator may be slower because MXNET_MKLDNN_DEBUG is enabled in the CI:

RE: [Discussion] Remove bundled llvm OpenMP

2018-11-22 Thread Lv, Tao A
Thanks for the great summary, Anton. I'm curious whether anybody has built mxnet successfully with ICC/ICPC? -Original Message- From: Anton Chernov [mailto:mecher...@gmail.com] Sent: Thursday, November 22, 2018 8:36 PM To: d...@mxnet.apache.org Subject: [Discussion] Remove bundled

RE: Include MKLDNN into default mxnet pip package

2018-11-21 Thread Lv, Tao A
that especially if we make MKLDNN the default. > > Good to know it is known already as a regression.Alex has created this > issue https://github.com/apache/incubator-mxnet/issues/13369, please > add details and link the corresponding issue in MKLDNN(I couldn't find). > > -Navee

RE: Include MKLDNN into default mxnet pip package

2018-11-21 Thread Lv, Tao A
Here are my answers to the questions from Kellen and Naveen about MKL-DNN. That doesn't mean I'm in favor of making MKL-DNN the default here. @Kellen, FYI, here is a list of the platforms which are officially supported by MKL-DNN. https://github.com/intel/mkl-dnn#system-requirements

RE: Hold on the merge of new MKL-DNN operator tests

2018-11-05 Thread Lv, Tao A
es, that's exactly the reason we have the CI setup. Let us know > if there's anything you think we can help with. > > On Fri, Nov 2, 2018 at 7:56 PM Lv, Tao A wrote: > > > > > Hi MXNet developers, > > > > I am working on PR#12953< > > https://github.co

Hold on the merge of new MKL-DNN operator tests

2018-11-02 Thread Lv, Tao A
Hi MXNet developers, I am working on PR#12953 to update the version of the MKL-DNN dependency. This PR will help to address several requests and issues from the community of both MXNet and MKL-DNN. It will also improve MXNet performance a lot

RE: Remove MKLML as dependency

2018-09-20 Thread Lv, Tao A
ded. On Thu, Sep 20, 2018 at 7:41 AM Lv, Tao A wrote: > Hah, seems it's a little confusing here. I think the "Intel MKL" in > the first statement includes both the full MKL and MKLML library. And > the "dynamic library" there obviously means the MKLML which is &

RE: Remove MKLML as dependency

2018-09-20 Thread Lv, Tao A
the spirit of simplifying things...limiting the deps... On Sep 20, 2018 07:41, "Lv, Tao A" wrote: Hah, seems it's a little confusing here. I think the "Intel MKL" in the first statement includes both the full MKL and MKLML library. And the "dynamic library" there obviously m

RE: Remove MKLML as dependency

2018-09-20 Thread Lv, Tao A
On Wed, Sep 19, 2018 at 11:49 PM Lv, Tao A wrote: > Hi Chris, please kindly check the statements here: > https://github.com/intel/mkl-dnn#installation > > " Intel MKL-DNN can take advantage of optimized matrix-matrix > multiplication (GEMM) function from Intel MKL. The dy

RE: Remove MKLML as dependency

2018-09-20 Thread Lv, Tao A
On Tue, Sep 18, 2018 at 11:31 PM Lv, Tao A wrote: > If you just want to test the performance, I think you need link MKL > for BLAS and MKL-DNN for NN. Also MKL-DNN should link MKL for better > performance. > > Here are some ways for you to install full MKL library if you don't >

RE: Remove MKLML as dependency

2018-09-19 Thread Lv, Tao A
is licensed? Best, Alex On 9/18/18, 7:50 PM, "Lv, Tao A" wrote: Hi Alex, Thanks for bringing this up. The original intention of MKLML is to provide a light and easy-to-access library for ML/DL community. It's released with MKL-DNN under Apache-2.0 license. AFAI

RE: Remove MKLML as dependency

2018-09-18 Thread Lv, Tao A
Hi Alex, Thanks for bringing this up. The original intention of MKLML is to provide a light and easy-to-access library for ML/DL community. It's released with MKL-DNN under Apache-2.0 license. AFAIK, MKL-DNN still relies on it for better performance. So I'm afraid there will be a performance

RE: Propose to discontinue supporting Apache MXNet on Windows 7

2018-08-30 Thread Lv, Tao A
If we really want to discontinue the support on Win7, maybe it should happen in a major release like 2.0.0. -Original Message- From: dev-return-4003-tao.a.lv=intel@mxnet.incubator.apache.org [mailto:dev-return-4003-tao.a.lv=intel@mxnet.incubator.apache.org] Sent: Thursday,

RE: Update on 1.2.1 release

2018-06-13 Thread Lv, Tao A
Yes, #10311 is only in the master branch, so I guess it won't impact the 1.2.0 branch or block the release of 1.2.1, right? A PR (#11273) has been submitted to disable the test temporarily and hopefully it will be fixed soon. -tao -Original Message- From: Marco de Abreu

RE: A proposal for unified integration with external acceleration libraries

2018-06-03 Thread Lv, Tao A
Hi Da and other developers, It's a great idea to limit external acceleration libs to a certain scope and subgraph. I am not quite familiar with TVM's and TensorRT's designs. But from the perspective of the MKL-DNN backend, here are my concerns about this proposal: 1. Is subgraph for all third party

RE: MXNet Protobuf dependency

2018-05-23 Thread Lv, Tao A
Yes. https://github.com/dmlc/ps-lite/pull/137 -Original Message- From: Hen [mailto:bay...@apache.org] Sent: Thursday, May 24, 2018 11:47 AM To: dev@mxnet.incubator.apache.org Subject: Re: MXNet Protobuf dependency Have they opened a PR with the ps-lite project? On Wed, May 23, 2018 at

Proposal for fused RNN operator on CPU

2018-04-25 Thread Lv, Tao A
Hi developers, As you may know, currently there is no fused RNN operator for CPU in mxnet and that prevents users from migrating or deploying their models on CPU, if their models are built with mxnet fused RNN cell APIs. This feature disparity also makes it hard to maintain mxnet code and
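For context on what such an operator fuses: an unfused RNN dispatches one operator call per time step, while a fused operator runs the whole sequence in a single kernel. A minimal numpy sketch of a vanilla tanh-RNN over a sequence (illustrative only; not the proposed MXNet implementation, and all names here are hypothetical):

```python
import numpy as np

def rnn_step(x, h, w_xh, w_hh, b):
    # One vanilla-RNN step: h' = tanh(x @ W_xh + h @ W_hh + b)
    return np.tanh(x @ w_xh + h @ w_hh + b)

def rnn_sequence(xs, h0, w_xh, w_hh, b):
    # This per-step loop is what a fused kernel would execute in one
    # call, avoiding repeated operator dispatch overhead on CPU.
    h = h0
    for x in xs:
        h = rnn_step(x, h, w_xh, w_hh, b)
    return h

# Shapes: 3 time steps, batch 2, input dim 4, hidden dim 5.
h_final = rnn_sequence(np.zeros((3, 2, 4)), np.zeros((2, 5)),
                       np.zeros((4, 5)), np.zeros((5, 5)), np.zeros(5))
```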