This is an automated email from the ASF dual-hosted git repository.

samskalicky pushed a commit to branch v1.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.x by this push:
     new 0bc01e9  [v1.x] Static build for mxnet-cu110 (#19272)
0bc01e9 is described below

commit 0bc01e9f85006d9a7c4adf79a514e059499ae1ce
Author: waytrue17 <[email protected]>
AuthorDate: Tue Oct 20 16:59:02 2020 -0700

    [v1.x] Static build for mxnet-cu110 (#19272)
    
    * static build with cuda 11.0
    
    * add new line at end of files, add set -e
    
    * update CD
    
    * update LIBCUDA_VERSION
    
    * update cudnn version
    
    Co-authored-by: Wei Chu <[email protected]>
---
 cd/Jenkinsfile_cd_pipeline                         |   2 +-
 cd/Jenkinsfile_release_job                         |   2 +-
 cd/README.md                                       |   1 +
 cd/utils/artifact_repository.md                    |   2 +-
 cd/utils/mxnet_base_image.sh                       |   3 +
 config/distribution/linux_cu110.cmake              |  34 ++++
 make/staticbuild/linux_cu110.mk                    | 173 +++++++++++++++++++++
 tools/pip/doc/CPU_ADDITIONAL.md                    |   1 +
 tools/pip/doc/CU100_ADDITIONAL.md                  |   1 +
 tools/pip/doc/CU101_ADDITIONAL.md                  |   1 +
 tools/pip/doc/CU102_ADDITIONAL.md                  |   1 +
 .../{CU102_ADDITIONAL.md => CU110_ADDITIONAL.md}   |   3 +-
 tools/pip/doc/CU92_ADDITIONAL.md                   |   1 +
 tools/pip/doc/NATIVE_ADDITIONAL.md                 |   1 +
 tools/pip/setup.py                                 |   5 +-
 tools/setup_gpu_build_tools.sh                     |  61 ++++++--
 tools/staticbuild/README.md                        |   2 +-
 17 files changed, 278 insertions(+), 16 deletions(-)

diff --git a/cd/Jenkinsfile_cd_pipeline b/cd/Jenkinsfile_cd_pipeline
index 5ed3e6d..a032225 100644
--- a/cd/Jenkinsfile_cd_pipeline
+++ b/cd/Jenkinsfile_cd_pipeline
@@ -36,7 +36,7 @@ pipeline {
 
   parameters {
     // Release parameters
-    string(defaultValue: "cpu,native,cu90,cu92,cu100,cu101,cu102", description: "Comma separated list of variants", name: "MXNET_VARIANTS")
+    string(defaultValue: "cpu,native,cu90,cu92,cu100,cu101,cu102,cu110", description: "Comma separated list of variants", name: "MXNET_VARIANTS")
     booleanParam(defaultValue: false, description: 'Whether this is a release build or not', name: "RELEASE_BUILD")
   }
 
diff --git a/cd/Jenkinsfile_release_job b/cd/Jenkinsfile_release_job
index c8b4918..502928a 100644
--- a/cd/Jenkinsfile_release_job
+++ b/cd/Jenkinsfile_release_job
@@ -43,7 +43,7 @@ pipeline {
    // any disruption caused by different COMMIT_ID values changing the job parameter configuration on
    // Jenkins.
    string(defaultValue: "mxnet_lib/static", description: "Pipeline to build", name: "RELEASE_JOB_TYPE")
-    string(defaultValue: "cpu,native,cu90,cu92,cu100,cu101,cu102", description: "Comma separated list of variants", name: "MXNET_VARIANTS")
+    string(defaultValue: "cpu,native,cu90,cu92,cu100,cu101,cu102,cu110", description: "Comma separated list of variants", name: "MXNET_VARIANTS")
     booleanParam(defaultValue: false, description: 'Whether this is a release build or not', name: "RELEASE_BUILD")
   }
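In both pipelines above, the only change is appending `cu110` to the comma-separated `MXNET_VARIANTS` default. A minimal, illustrative shell sketch of how such a list splits into per-variant work items (the loop is hypothetical, not taken from the CD code):

```shell
#!/bin/sh
# Illustrative only: split the comma-separated variant list the way a
# downstream stage might, and check that the new cu110 variant is present.
MXNET_VARIANTS="cpu,native,cu90,cu92,cu100,cu101,cu102,cu110"

found=0
# Replace commas with spaces and iterate over the resulting words.
for variant in $(echo "$MXNET_VARIANTS" | tr ',' ' '); do
    if [ "$variant" = "cu110" ]; then
        found=1
    fi
done

if [ "$found" -eq 1 ]; then
    echo "cu110 enabled"
fi
```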
 
diff --git a/cd/README.md b/cd/README.md
index 266455c..282016c 100644
--- a/cd/README.md
+++ b/cd/README.md
@@ -36,6 +36,7 @@ Currently, below variants are supported. All of these variants except native have MKL-DNN backend enabled.
 * *cu100*: CUDA 10
 * *cu101*: CUDA 10.1
 * *cu102*: CUDA 10.2
+* *cu110*: CUDA 11.0
 
 *For more on variants, see [here](https://github.com/apache/incubator-mxnet/issues/8671)*
 
diff --git a/cd/utils/artifact_repository.md b/cd/utils/artifact_repository.md
index 12b4a27..49399bb 100644
--- a/cd/utils/artifact_repository.md
+++ b/cd/utils/artifact_repository.md
@@ -53,7 +53,7 @@ If not set, derived through the value of sys.platform (https://docs.python.org/3
 
 **Variant**
 
-Manually configured through the --variant argument. The current variants are: cpu, native, cu80, cu90, cu92, cu100, and cu101.
+Manually configured through the --variant argument. The current variants are: cpu, native, cu92, cu100, cu101, cu102, and cu110.
 
 As long as the tool is being run from the MXNet code base, the runtime feature detection tool (https://github.com/larroy/mxnet/blob/dd432b7f241c9da2c96bcb877c2dc84e6a1f74d4/docs/api/python/libinfo/libinfo.md) can be used to detect whether the library has been compiled with MKL (library has MKL-DNN feature enabled) and/or CUDA support (compiled with CUDA feature enabled).
 
diff --git a/cd/utils/mxnet_base_image.sh b/cd/utils/mxnet_base_image.sh
index c87db66..0e1ecc8 100755
--- a/cd/utils/mxnet_base_image.sh
+++ b/cd/utils/mxnet_base_image.sh
@@ -39,6 +39,9 @@ case ${mxnet_variant} in
     cu102*)
     echo "nvidia/cuda:10.2-cudnn7-runtime-ubuntu16.04"
     ;;
+    cu110*)
+    echo "nvidia/cuda:11.0-cudnn8-runtime-ubuntu16.04"
+    ;;
     cpu)
     echo "ubuntu:16.04"
     ;;
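The variant-to-base-image mapping above can be exercised in isolation. This sketch wraps just the branches visible in the hunk in a function (the function name is hypothetical; the image strings are taken from the diff):

```shell
#!/bin/sh
# Hypothetical standalone version of the case statement from
# cd/utils/mxnet_base_image.sh, covering the branches shown above.
base_image_for_variant() {
    mxnet_variant=$1
    case ${mxnet_variant} in
        cu102*)
            echo "nvidia/cuda:10.2-cudnn7-runtime-ubuntu16.04"
            ;;
        cu110*)
            echo "nvidia/cuda:11.0-cudnn8-runtime-ubuntu16.04"
            ;;
        cpu)
            echo "ubuntu:16.04"
            ;;
        *)
            echo "unknown variant: ${mxnet_variant}" >&2
            return 1
            ;;
    esac
}

base_image_for_variant cu110   # prints nvidia/cuda:11.0-cudnn8-runtime-ubuntu16.04
```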
diff --git a/config/distribution/linux_cu110.cmake b/config/distribution/linux_cu110.cmake
new file mode 100644
index 0000000..7d44a99
--- /dev/null
+++ b/config/distribution/linux_cu110.cmake
@@ -0,0 +1,34 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+set(CMAKE_BUILD_TYPE "Distribution" CACHE STRING "Build type")
+set(CFLAGS "-mno-avx" CACHE STRING "CFLAGS")
+set(CXXFLAGS "-mno-avx" CACHE STRING "CXXFLAGS")
+
+set(USE_CUDA ON CACHE BOOL "Build with CUDA support")
+set(USE_CUDNN ON CACHE BOOL "Build with CUDNN support")
+set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
+set(USE_OPENMP ON CACHE BOOL "Build with OpenMP support")
+set(USE_MKL_IF_AVAILABLE OFF CACHE BOOL "Use Intel MKL if found")
+set(USE_MKLDNN ON CACHE BOOL "Build with MKL-DNN support")
+set(USE_LAPACK ON CACHE BOOL "Build with lapack support")
+set(USE_TVM_OP OFF CACHE BOOL "Enable use of TVM operator build system.")
+set(USE_SSE ON CACHE BOOL "Build with x86 SSE instruction support")
+set(USE_F16C OFF CACHE BOOL "Build with x86 F16C instruction support")
+
+set(CUDACXX "/usr/local/cuda-11.0/bin/nvcc" CACHE STRING "Cuda compiler")
+set(MXNET_CUDA_ARCH "5.0;6.0;7.0;8.0" CACHE STRING "Cuda architectures")
diff --git a/make/staticbuild/linux_cu110.mk b/make/staticbuild/linux_cu110.mk
new file mode 100644
index 0000000..fda9130
--- /dev/null
+++ b/make/staticbuild/linux_cu110.mk
@@ -0,0 +1,173 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------------------------
+#  Template configuration for compiling mxnet for making python wheel
+#-------------------------------------------------------------------------------
+
+#---------------------
+# choice of compiler
+#--------------------
+
+export CC = gcc
+export CXX = g++
+export NVCC = nvcc
+
+# whether to compile with options for MXNet developers
+DEV = 0
+
+# whether to compile with debugging
+DEBUG = 0
+
+# whether to turn on signal handler (e.g. segfault logger)
+USE_SIGNAL_HANDLER = 1
+
+# the additional link flags you want to add
+ADD_LDFLAGS += -L$(DEPS_PATH)/lib $(DEPS_PATH)/lib/libculibos.a -lpng -ltiff -ljpeg -lz -ldl -lgfortran -Wl,--version-script=$(CURDIR)/make/config/libmxnet.ver,-rpath,'$${ORIGIN}',--gc-sections
+
+# the additional compile flags you want to add
+ADD_CFLAGS += -I$(DEPS_PATH)/include -ffunction-sections -fdata-sections
+
+#---------------------------------------------
+# matrix computation libraries for CPU/GPU
+#---------------------------------------------
+
+# choose the version of blas you want to use
+# can be: mkl, blas, atlas, openblas
+# by default, atlas is used for linux and apple for osx
+USE_BLAS=openblas
+
+# whether to use opencv during compilation
+# you can disable it; however, you will not be able to use the
+# imbin iterator
+USE_OPENCV = 1
+# Add OpenCV include path, in which the directory `opencv2` exists
+USE_OPENCV_INC_PATH = NONE
+# Add OpenCV shared library path, in which the shared library exists
+USE_OPENCV_LIB_PATH = NONE
+
+# whether to use CUDA during compilation
+USE_CUDA = 1
+
+# add the path to the CUDA library for link and compile flags
+# if you have already added them to environment variables, leave it as NONE
+# USE_CUDA_PATH = /usr/local/cuda
+USE_CUDA_PATH = $(DEPS_PATH)/usr/local/cuda-11.0
+
+# whether to use CuDNN library
+USE_CUDNN = 1
+
+# whether to use NCCL library
+USE_NCCL = 1
+
+# CUDA architecture setting: going with all of them.
+# For CUDA < 6.0, comment the *_50 lines for compatibility.
+# CUDA_ARCH :=
+
+# whether to use CUDA runtime compiling for writing kernels in native language (i.e. Python)
+ENABLE_CUDA_RTC = 1
+
+USE_NVTX=1
+
+# use openmp for parallelization
+USE_OPENMP = 1
+USE_OPERATOR_TUNING = 1
+USE_LIBJPEG_TURBO = 1
+
+# whether use MKL-DNN library
+USE_MKLDNN = 1
+
+# whether use NNPACK library
+USE_NNPACK = 0
+
+# whether to use lapack during compilation
+# only effective when compiled with blas versions openblas/apple/atlas/mkl
+USE_LAPACK = 1
+
+# path to lapack library in case of a non-standard installation
+USE_LAPACK_PATH = $(DEPS_PATH)/lib
+
+# add the path to the Intel library; you may need it for MKL if you did not add
+# the path to the environment variables
+USE_INTEL_PATH = NONE
+
+# If use MKL, choose static link automatically to allow python wrapper
+ifeq ($(USE_BLAS), mkl)
+USE_STATIC_MKL = 1
+else
+USE_STATIC_MKL = NONE
+endif
+
+#----------------------------
+# Settings for power and arm arch
+#----------------------------
+ARCH := $(shell uname -a)
+ifneq (,$(filter $(ARCH), armv6l armv7l powerpc64le ppc64le aarch64))
+       USE_SSE=0
+else
+       USE_SSE=1
+endif
+
+#----------------------------
+# distributed computing
+#----------------------------
+
+# whether or not to enable multi-machine support
+USE_DIST_KVSTORE = 1
+
+# whether or not to allow reading and writing HDFS directly. If yes, then hadoop is
+# required
+USE_HDFS = 0
+
+# path to libjvm.so. required if USE_HDFS=1
+LIBJVM=$(JAVA_HOME)/jre/lib/amd64/server
+
+# whether or not to allow reading and writing AWS S3 directly. If yes, then
+# libcurl4-openssl-dev is required; it can be installed on Ubuntu with
+# sudo apt-get install -y libcurl4-openssl-dev
+USE_S3 = 1
+
+#----------------------------
+# additional operators
+#----------------------------
+
+# path to folders containing project-specific operators that you don't want to put in src/operators
+EXTRA_OPERATORS =
+
+
+#----------------------------
+# plugins
+#----------------------------
+
+# whether to use caffe integration. This requires installing caffe.
+# You also need to add CAFFE_PATH/build/lib to your LD_LIBRARY_PATH
+# CAFFE_PATH = $(HOME)/caffe
+# MXNET_PLUGINS += plugin/caffe/caffe.mk
+
+# whether to use torch integration. This requires installing torch.
+# You also need to add TORCH_PATH/install/lib to your LD_LIBRARY_PATH
+# TORCH_PATH = $(HOME)/torch
+# MXNET_PLUGINS += plugin/torch/torch.mk
+
+# WARPCTC_PATH = $(HOME)/warp-ctc
+# MXNET_PLUGINS += plugin/warpctc/warpctc.mk
+
+# whether to use sframe integration. This requires building SFrame
+# [email protected]:dato-code/SFrame.git
+# SFRAME_PATH = $(HOME)/SFrame
+# MXNET_PLUGINS += plugin/sframe/plugin.mk
+
diff --git a/tools/pip/doc/CPU_ADDITIONAL.md b/tools/pip/doc/CPU_ADDITIONAL.md
index 035ef4c..090186e 100644
--- a/tools/pip/doc/CPU_ADDITIONAL.md
+++ b/tools/pip/doc/CPU_ADDITIONAL.md
@@ -18,6 +18,7 @@
 Prerequisites
 -------------
 This package supports Linux, Mac OSX, and Windows platforms. You may also want to check:
+- [mxnet-cu110](https://pypi.python.org/pypi/mxnet-cu110/) with CUDA-11.0 support.
 - [mxnet-cu102](https://pypi.python.org/pypi/mxnet-cu102/) with CUDA-10.2 support.
 - [mxnet-cu101](https://pypi.python.org/pypi/mxnet-cu101/) with CUDA-10.1 support.
 - [mxnet-cu100](https://pypi.python.org/pypi/mxnet-cu100/) with CUDA-10.0 support.
diff --git a/tools/pip/doc/CU100_ADDITIONAL.md b/tools/pip/doc/CU100_ADDITIONAL.md
index 439d900..c7638e8 100644
--- a/tools/pip/doc/CU100_ADDITIONAL.md
+++ b/tools/pip/doc/CU100_ADDITIONAL.md
@@ -18,6 +18,7 @@
 Prerequisites
 -------------
 This package supports Linux and Windows platforms. You may also want to check:
+- [mxnet-cu110](https://pypi.python.org/pypi/mxnet-cu110/) with CUDA-11.0 support.
 - [mxnet-cu102](https://pypi.python.org/pypi/mxnet-cu102/) with CUDA-10.2 support.
 - [mxnet-cu101](https://pypi.python.org/pypi/mxnet-cu101/) with CUDA-10.1 support.
 - [mxnet-cu92](https://pypi.python.org/pypi/mxnet-cu92/) with CUDA-9.2 support.
diff --git a/tools/pip/doc/CU101_ADDITIONAL.md b/tools/pip/doc/CU101_ADDITIONAL.md
index c983c73..44ebb89 100644
--- a/tools/pip/doc/CU101_ADDITIONAL.md
+++ b/tools/pip/doc/CU101_ADDITIONAL.md
@@ -18,6 +18,7 @@
 Prerequisites
 -------------
 This package supports Linux and Windows platforms. You may also want to check:
+- [mxnet-cu110](https://pypi.python.org/pypi/mxnet-cu110/) with CUDA-11.0 support.
 - [mxnet-cu102](https://pypi.python.org/pypi/mxnet-cu102/) with CUDA-10.2 support.
 - [mxnet-cu100](https://pypi.python.org/pypi/mxnet-cu100/) with CUDA-10.0 support.
 - [mxnet-cu92](https://pypi.python.org/pypi/mxnet-cu92/) with CUDA-9.2 support.
diff --git a/tools/pip/doc/CU102_ADDITIONAL.md b/tools/pip/doc/CU102_ADDITIONAL.md
index 8417db1..4a81de8 100644
--- a/tools/pip/doc/CU102_ADDITIONAL.md
+++ b/tools/pip/doc/CU102_ADDITIONAL.md
@@ -18,6 +18,7 @@
 Prerequisites
 -------------
 This package supports Linux and Windows platforms. You may also want to check:
+- [mxnet-cu110](https://pypi.python.org/pypi/mxnet-cu110/) with CUDA-11.0 support.
 - [mxnet-cu101](https://pypi.python.org/pypi/mxnet-cu101/) with CUDA-10.1 support.
 - [mxnet-cu100](https://pypi.python.org/pypi/mxnet-cu100/) with CUDA-10.0 support.
 - [mxnet-cu92](https://pypi.python.org/pypi/mxnet-cu92/) with CUDA-9.2 support.
diff --git a/tools/pip/doc/CU102_ADDITIONAL.md b/tools/pip/doc/CU110_ADDITIONAL.md
similarity index 95%
copy from tools/pip/doc/CU102_ADDITIONAL.md
copy to tools/pip/doc/CU110_ADDITIONAL.md
index 8417db1..8eaa7b2 100644
--- a/tools/pip/doc/CU102_ADDITIONAL.md
+++ b/tools/pip/doc/CU110_ADDITIONAL.md
@@ -18,6 +18,7 @@
 Prerequisites
 -------------
 This package supports Linux and Windows platforms. You may also want to check:
+- [mxnet-cu102](https://pypi.python.org/pypi/mxnet-cu102/) with CUDA-10.2 support.
 - [mxnet-cu101](https://pypi.python.org/pypi/mxnet-cu101/) with CUDA-10.1 support.
 - [mxnet-cu100](https://pypi.python.org/pypi/mxnet-cu100/) with CUDA-10.0 support.
 - [mxnet-cu92](https://pypi.python.org/pypi/mxnet-cu92/) with CUDA-9.2 support.
@@ -39,5 +40,5 @@ Installation
 ------------
 To install:
 ```bash
-pip install mxnet-cu102
+pip install mxnet-cu110
 ```
diff --git a/tools/pip/doc/CU92_ADDITIONAL.md b/tools/pip/doc/CU92_ADDITIONAL.md
index ea2c299..fa4ff28 100644
--- a/tools/pip/doc/CU92_ADDITIONAL.md
+++ b/tools/pip/doc/CU92_ADDITIONAL.md
@@ -18,6 +18,7 @@
 Prerequisites
 -------------
 This package supports Linux and Windows platforms. You may also want to check:
+- [mxnet-cu110](https://pypi.python.org/pypi/mxnet-cu110/) with CUDA-11.0 support.
 - [mxnet-cu102](https://pypi.python.org/pypi/mxnet-cu102/) with CUDA-10.2 support.
 - [mxnet-cu90](https://pypi.python.org/pypi/mxnet-cu90/) with CUDA-9.0 support.
 - [mxnet-cu80](https://pypi.python.org/pypi/mxnet-cu80/) with CUDA-8.0 support.
diff --git a/tools/pip/doc/NATIVE_ADDITIONAL.md b/tools/pip/doc/NATIVE_ADDITIONAL.md
index f420955..23c592b 100644
--- a/tools/pip/doc/NATIVE_ADDITIONAL.md
+++ b/tools/pip/doc/NATIVE_ADDITIONAL.md
@@ -18,6 +18,7 @@
 Prerequisites
 -------------
 This package supports Linux and Windows platforms. You may also want to check:
+- [mxnet-cu110](https://pypi.python.org/pypi/mxnet-cu110/) with CUDA-11.0 support.
 - [mxnet-cu102](https://pypi.python.org/pypi/mxnet-cu102/) with CUDA-10.2 support.
 - [mxnet-cu101](https://pypi.python.org/pypi/mxnet-cu101/) with CUDA-10.1 support.
 - [mxnet-cu92](https://pypi.python.org/pypi/mxnet-cu92/) with CUDA-9.2 support.
diff --git a/tools/pip/setup.py b/tools/pip/setup.py
index d9183f9..0d28362 100644
--- a/tools/pip/setup.py
+++ b/tools/pip/setup.py
@@ -133,7 +133,9 @@ libraries = []
 if variant == 'CPU':
     libraries.append('openblas')
 else:
-    if variant.startswith('CU102'):
+    if variant.startswith('CU110'):
+        libraries.append('CUDA-11.0')
+    elif variant.startswith('CU102'):
         libraries.append('CUDA-10.2')
     elif variant.startswith('CU101'):
         libraries.append('CUDA-10.1')
@@ -216,3 +218,4 @@ setup(name=package_name,
           'Topic :: Software Development :: Libraries :: Python Modules',
       ],
       url='https://github.com/apache/incubator-mxnet')
+
diff --git a/tools/setup_gpu_build_tools.sh b/tools/setup_gpu_build_tools.sh
index bba3710..6a50dd5 100755
--- a/tools/setup_gpu_build_tools.sh
+++ b/tools/setup_gpu_build_tools.sh
@@ -23,11 +23,22 @@
 # the following environment variables:
 # PATH, CPLUS_INCLUDE_PATH, C_INCLUDE_PATH, LIBRARY_PATH, LD_LIBRARY_PATH, NVCC
 
+set -e
+
 VARIANT=$1
 DEPS_PATH=$2
 
 >&2 echo "Setting CUDA versions for $VARIANT"
-if [[ $VARIANT == cu102* ]]; then
+if [[ $VARIANT == cu110* ]]; then
+    CUDA_VERSION='11.0.221-1'
+    CUDA_PATCH_VERSION='11.2.0.252-1'
+    CUDA_LIBS_VERSION='10.2.1.245-1'
+    CUDA_SOLVER_VERSION='10.6.0.245-1'
+    CUDA_NVTX_VERSION='11.0.167-1'
+    LIBCUDA_VERSION='450.36.06-0ubuntu1'
+    LIBCUDNN_VERSION='8.0.4.30-1+cuda11.0'
+    LIBNCCL_VERSION='2.7.8-1+cuda11.0'
+elif [[ $VARIANT == cu102* ]]; then
     CUDA_VERSION='10.2.89-1'
     CUDA_PATCH_VERSION='10.2.2.89-1'
     LIBCUDA_VERSION='440.33.01-0ubuntu1'
@@ -86,7 +97,7 @@ if [[ $VARIANT == cu* ]]; then
     os_name=$(cat /etc/*release | grep '^ID=' | sed 's/^.*=//g')
    os_version=$(cat /etc/*release | grep VERSION_ID | sed 's/^.*"\([0-9]*\)\.\([0-9]*\)"/\1\2/g')
     os_id="${os_name}${os_version}"
-    if [[ $CUDA_MAJOR_DASH == 9-* ]] || [[ $CUDA_MAJOR_DASH == 10-* ]]; then
+    if [[ $CUDA_MAJOR_DASH == 9-* ]] || [[ $CUDA_MAJOR_DASH == 10-* ]] || [[ $CUDA_MAJOR_DASH == 11-* ]]; then
         os_id="ubuntu1604"
     fi
    export PATH=/usr/lib/binutils-2.26/bin/:${PATH}:$DEPS_PATH/usr/local/cuda-$CUDA_MAJOR_VERSION/bin
@@ -98,7 +109,31 @@ if [[ $VARIANT == cu* ]]; then
 fi
 
 # list of debs to download from nvidia
-if [[ $VARIANT == cu102* ]]; then
+if [[ $VARIANT == cu110* ]]; then
+    cuda_files=( \
+      "libcublas-${CUDA_MAJOR_DASH}_${CUDA_PATCH_VERSION}_amd64.deb" \
+      "libcublas-dev-${CUDA_MAJOR_DASH}_${CUDA_PATCH_VERSION}_amd64.deb" \
+      "cuda-cudart-${CUDA_MAJOR_DASH}_${CUDA_VERSION}_amd64.deb" \
+      "cuda-cudart-dev-${CUDA_MAJOR_DASH}_${CUDA_VERSION}_amd64.deb" \
+      "libcurand-${CUDA_MAJOR_DASH}_${CUDA_LIBS_VERSION}_amd64.deb" \
+      "libcurand-dev-${CUDA_MAJOR_DASH}_${CUDA_LIBS_VERSION}_amd64.deb" \
+      "libcufft-${CUDA_MAJOR_DASH}_${CUDA_LIBS_VERSION}_amd64.deb" \
+      "libcufft-dev-${CUDA_MAJOR_DASH}_${CUDA_LIBS_VERSION}_amd64.deb" \
+      "cuda-nvrtc-${CUDA_MAJOR_DASH}_${CUDA_VERSION}_amd64.deb" \
+      "cuda-nvrtc-dev-${CUDA_MAJOR_DASH}_${CUDA_VERSION}_amd64.deb" \
+      "libcusolver-${CUDA_MAJOR_DASH}_${CUDA_SOLVER_VERSION}_amd64.deb" \
+      "libcusolver-dev-${CUDA_MAJOR_DASH}_${CUDA_SOLVER_VERSION}_amd64.deb" \
+      "cuda-nvcc-${CUDA_MAJOR_DASH}_${CUDA_VERSION}_amd64.deb" \
+      "cuda-nvtx-${CUDA_MAJOR_DASH}_${CUDA_NVTX_VERSION}_amd64.deb" \
+      "libcuda1-${LIBCUDA_MAJOR}_${LIBCUDA_VERSION}_amd64.deb" \
+      "cuda-nvprof-${CUDA_MAJOR_DASH}_${CUDA_VERSION}_amd64.deb" \
+      "nvidia-${LIBCUDA_MAJOR}_${LIBCUDA_VERSION}_amd64.deb" \
+    )
+    ml_files=( \
+      "libcudnn${LIBCUDNN_MAJOR}-dev_${LIBCUDNN_VERSION}_amd64.deb" \
+      "libnccl-dev_${LIBNCCL_VERSION}_amd64.deb" \
+    )
+elif [[ $VARIANT == cu102* ]]; then
     cuda_files=( \
       "cuda-core-${CUDA_MAJOR_DASH}_${CUDA_VERSION}_amd64.deb" \
       "libcublas10_${CUDA_PATCH_VERSION}_amd64.deb" \
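The deb filenames above interpolate variables such as `${CUDA_MAJOR_DASH}`. A hedged sketch of how such values can be derived from the `CUDA_VERSION='11.0.221-1'` string set for the cu110 variant (the actual script may compute them differently):

```shell
#!/bin/sh
# Illustrative derivation of the CUDA_MAJOR_VERSION / CUDA_MAJOR_DASH style
# variables used in the deb filenames above, starting from a version string
# like the CUDA_VERSION='11.0.221-1' set for cu110.
CUDA_VERSION='11.0.221-1'

# "11.0.221-1" -> "11.0"
CUDA_MAJOR_VERSION=$(echo "$CUDA_VERSION" | cut -d. -f1,2)
# "11.0" -> "11-0", the form used in package names such as cuda-cudart-11-0
CUDA_MAJOR_DASH=$(echo "$CUDA_MAJOR_VERSION" | tr '.' '-')

echo "cuda-cudart-${CUDA_MAJOR_DASH}_${CUDA_VERSION}_amd64.deb"
# -> cuda-cudart-11-0_11.0.221-1_amd64.deb
```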
@@ -312,11 +347,17 @@ if [[ ! -d $DEPS_PATH/usr/local/cuda-${CUDA_MAJOR_VERSION} ]]; then
         rm package.deb
     done
 
-    mkdir -p ${prefix}/include
-    mkdir -p ${prefix}/lib
-    cp ${prefix}/usr/include/x86_64-linux-gnu/cudnn_v${LIBCUDNN_MAJOR}.h ${prefix}/include/cudnn.h
-    ln -s libcudnn_static_v${LIBCUDNN_MAJOR}.a ${prefix}/usr/lib/x86_64-linux-gnu/libcudnn.a
-    cp ${prefix}/usr/local/cuda-${CUDA_MAJOR_VERSION}/lib64/*.a ${prefix}/lib/
-    cp ${prefix}/usr/include/nccl.h ${prefix}/include/nccl.h
-    ln -s libnccl_static.a ${prefix}/usr/lib/x86_64-linux-gnu/libnccl.a
+    mkdir -p ${prefix}/include ${prefix}/lib ${prefix}/usr/lib/x86_64-linux-gnu
+    if [[ $LIBCUDNN_MAJOR == 8 ]]; then
+        for h in ${prefix}/usr/include/x86_64-linux-gnu/cudnn_*_v8.h; do
+            newfile=$(basename $h | sed 's/_v8//')
+            cp $h ${prefix}/include/$newfile
+        done
+    fi
+    cp -f ${prefix}/usr/include/x86_64-linux-gnu/cudnn_v${LIBCUDNN_MAJOR}.h ${prefix}/include/cudnn.h
+    ln -sf libcudnn_static_v${LIBCUDNN_MAJOR}.a ${prefix}/usr/lib/x86_64-linux-gnu/libcudnn.a
+    cp -f ${prefix}/usr/local/cuda-${CUDA_MAJOR_VERSION}/lib64/*.a ${prefix}/lib/
+    cp -f ${prefix}/usr/include/nccl.h ${prefix}/include/nccl.h
+    ln -sf libnccl_static.a ${prefix}/usr/lib/x86_64-linux-gnu/libnccl.a
 fi
+
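The cuDNN 8 header handling added above reflects that cuDNN 8 splits its API across suffixed headers (e.g. `cudnn_ops_infer_v8.h`) rather than a single versioned header. The loop can be rehearsed in isolation with a scratch directory standing in for `${prefix}` (the two header names here are examples, not an exhaustive list):

```shell
#!/bin/sh
set -e
# Rehearsal of the v8 header-staging loop from the hunk above, using a
# temporary directory in place of ${prefix}. The loop strips the _v8
# suffix while copying headers into ${prefix}/include.
prefix=$(mktemp -d)
mkdir -p ${prefix}/usr/include/x86_64-linux-gnu ${prefix}/include
touch ${prefix}/usr/include/x86_64-linux-gnu/cudnn_ops_infer_v8.h \
      ${prefix}/usr/include/x86_64-linux-gnu/cudnn_adv_train_v8.h

LIBCUDNN_MAJOR=8
if [ "$LIBCUDNN_MAJOR" = "8" ]; then
    for h in ${prefix}/usr/include/x86_64-linux-gnu/cudnn_*_v8.h; do
        newfile=$(basename $h | sed 's/_v8//')
        cp $h ${prefix}/include/$newfile
    done
fi

ls ${prefix}/include
```

After the loop, `${prefix}/include` contains `cudnn_ops_infer.h` and `cudnn_adv_train.h`, matching the layout the rest of the build expects.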
diff --git a/tools/staticbuild/README.md b/tools/staticbuild/README.md
index af0abcb..e21abdf 100644
--- a/tools/staticbuild/README.md
+++ b/tools/staticbuild/README.md
@@ -27,7 +27,7 @@ environment variable settings. Here are examples you can run with this script:
 ```
 tools/staticbuild/build.sh cu102
 ```
-This would build the mxnet package based on CUDA 10.2. Currently, we support variants cpu, native, cu90, cu92, cu100, and cu101. All of these variants expect native have MKL-DNN backend enabled. 
+This would build the mxnet package based on CUDA 10.2. Currently, we support variants cpu, native, cu92, cu100, cu101, cu102, and cu110. All of these variants except native have the MKL-DNN backend enabled.
 
 ```
 tools/staticbuild/build.sh cpu
