This is an automated email from the ASF dual-hosted git repository.
marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new ff10e1a Fix the installation doc for MKL-DNN backend (#12534)
ff10e1a is described below
commit ff10e1acce1956ed4ac7e53402bc9093b37b8d13
Author: Tao Lv <[email protected]>
AuthorDate: Mon Sep 17 21:15:24 2018 +0800
Fix the installation doc for MKL-DNN backend (#12534)
* fix build from source doc for mkldnn backend
* fix doc
---
docs/install/build_from_source.md | 10 +++++-----
docs/install/ubuntu_setup.md | 10 +++++++++-
2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/docs/install/build_from_source.md b/docs/install/build_from_source.md
index 6c0a4da..4f0235f 100644
--- a/docs/install/build_from_source.md
+++ b/docs/install/build_from_source.md
@@ -40,7 +40,7 @@ MXNet supports multiple mathematical backends for computations on the CPU:
* [Apple Accelerate](https://developer.apple.com/documentation/accelerate)
* [ATLAS](http://math-atlas.sourceforge.net/)
* [MKL](https://software.intel.com/en-us/intel-mkl) (MKL, MKLML)
-* [MKLDNN](https://github.com/intel/mkl-dnn)
+* [MKL-DNN](https://github.com/intel/mkl-dnn)
* [OpenBLAS](http://www.openblas.net/)
Usage of these are covered in more detail in the [build configurations](#build-configurations) section.
@@ -92,13 +92,13 @@ The following lists show this order by library and `cmake` switch.
For desktop platforms (x86_64):
-1. MKLDNN (submodule) | `USE_MKLDNN`
+1. MKL-DNN (submodule) | `USE_MKLDNN`
2. MKL | `USE_MKL_IF_AVAILABLE`
3. MKLML (downloaded) | `USE_MKLML`
4. Apple Accelerate | `USE_APPLE_ACCELERATE_IF_AVAILABLE` | Mac only
5. OpenBLAS | `BLAS` | Options: Atlas, Open, MKL, Apple
-Note: If `USE_MKL_IF_AVAILABLE` is set to False then MKLML and MKLDNN will be disabled as well for configuration
+Note: If `USE_MKL_IF_AVAILABLE` is set to False then MKLML and MKL-DNN will be disabled as well for configuration
backwards compatibility.
For embedded platforms (all other and if cross compiled):
@@ -129,8 +129,8 @@ It has following flavors:
<!-- [Removed until #11148 is merged.] This is the most effective option
since it can be downloaded and installed automatically
by the cmake script (see cmake/DownloadMKLML.cmake).-->
-* MKLDNN is a separate open-source library, it can be used separately from MKL or MKLML. It is
-  shipped as a subrepo with MXNet source code (see 3rdparty/mkldnn or the [mkl-dnn project](https://github.com/intel/mkl-dnn))
+* MKL-DNN is a separate open-source library, it can be used separately from MKL or MKLML. It is
+  shipped as a subrepo with MXNet source code (see 3rdparty/mkldnn or the [MKL-DNN project](https://github.com/intel/mkl-dnn))
Since the full MKL library is almost always faster than any other BLAS library it's turned on by default,
however it needs to be downloaded and installed manually before doing `cmake` configuration.
diff --git a/docs/install/ubuntu_setup.md b/docs/install/ubuntu_setup.md
index 07bf2cb..804887a 100644
--- a/docs/install/ubuntu_setup.md
+++ b/docs/install/ubuntu_setup.md
@@ -70,7 +70,7 @@ pip install mxnet-cu92mkl
Alternatively, you can use the table below to select the package that suits your purpose.
-| MXNet Version | Basic | CUDA | MKL | CUDA/MKL |
+| MXNet Version | Basic | CUDA | MKL-DNN | CUDA/MKL-DNN |
|-|-|-|-|-|
| Latest | mxnet | mxnet-cu92 | mxnet-mkl | mxnet-cu92mkl |
@@ -166,6 +166,14 @@ If building on CPU and using OpenBLAS:
make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas
```
+If building on CPU and using MKL and MKL-DNN (make sure MKL is installed according to [Math Library Selection](build_from_source.html#math-library-selection) and [MKL-DNN README](https://github.com/apache/incubator-mxnet/blob/master/MKLDNN_README.md)):
+
+```bash
+ git clone --recursive https://github.com/apache/incubator-mxnet.git
+    cd incubator-mxnet
+ make -j $(nproc) USE_OPENCV=1 USE_BLAS=mkl USE_MKLDNN=1
+```
+
If building on GPU and you want OpenCV and OpenBLAS (make sure you have installed the [CUDA dependencies first](#cuda-dependencies)):
```bash