This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new dd1e1c0  Publish triggered by CI
dd1e1c0 is described below

commit dd1e1c031c27540c62007a5f530327f8649c803d
Author: mxnet-ci <mxnet-ci>
AuthorDate: Wed Sep 2 18:44:36 2020 +0000

    Publish triggered by CI
---
 .htaccess                                |   6 +-
 api/python/docs/_modules/mxnet/util.html |  90 +++++++-------
 date.txt                                 |   1 -
 feed.xml                                 |   2 +-
 get_started/download.html                |  12 +-
 get_started/index.html                   | 204 ++++++++++++++++++++++++-------
 6 files changed, 216 insertions(+), 99 deletions(-)

diff --git a/.htaccess b/.htaccess
index 679b613..4444d1d 100644
--- a/.htaccess
+++ b/.htaccess
@@ -19,15 +19,15 @@ RewriteOptions AllowNoSlash
 
   # Web fonts
   ExpiresByType application/font-woff     "access plus 1 month"
-  
+
 </IfModule>
 
-# Set default website version to current stable (v1.6)
+# Set default website version to current stable (v1.7)
 RewriteCond %{REQUEST_URI} !^/versions/
 RewriteCond %{HTTP_REFERER} !mxnet.apache.org
 RewriteCond %{HTTP_REFERER} !mxnet.incubator.apache.org
 RewriteCond %{HTTP_REFERER} !mxnet.cdn.apache.org
-RewriteRule ^(.*)$ /versions/1.6/$1 [r=307,L]
+RewriteRule ^(.*)$ /versions/1.7/$1 [r=307,L]
 
# Redirect Chinese visitors to Chinese CDN, temporary solution for slow site speed in China
 RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^CN$
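
For context on the hunk above: it bumps the site's default redirect target from the v1.6 docs to v1.7. A minimal sketch of the equivalent logic in Python (a hypothetical helper for illustration, not part of the site or this commit) shows how the conditions combine: requests outside `/versions/`, with no referer from an mxnet.apache.org property, receive a 307 redirect to the current stable docs.

```python
# Hypothetical sketch of the .htaccess rewrite logic above (not the actual
# Apache implementation; mod_rewrite details such as per-directory pattern
# stripping are simplified here).

MXNET_REFERERS = (
    "mxnet.apache.org",
    "mxnet.incubator.apache.org",
    "mxnet.cdn.apache.org",
)

def rewrite(request_uri, referer=""):
    """Return (status, location) mimicking the RewriteRule, or None if no rewrite."""
    if request_uri.startswith("/versions/"):
        return None  # already pinned to a version, first RewriteCond fails
    if any(host in referer for host in MXNET_REFERERS):
        return None  # internal navigation keeps its original path
    # [r=307,L]: temporary redirect, and stop processing further rules
    return (307, "/versions/1.7" + request_uri)
```

A request for `/get_started/index.html` from an external link would thus land on `/versions/1.7/get_started/index.html`; the 307 (rather than 301) keeps the mapping easy to change when the next stable release ships.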
diff --git a/api/python/docs/_modules/mxnet/util.html b/api/python/docs/_modules/mxnet/util.html
index a8fa4c5..7f3c9a8 100644
--- a/api/python/docs/_modules/mxnet/util.html
+++ b/api/python/docs/_modules/mxnet/util.html
@@ -1236,7 +1236,7 @@
     <span class="k">return</span> <span class="n">free_mem</span><span 
class="o">.</span><span class="n">value</span><span class="p">,</span> <span 
class="n">total_mem</span><span class="o">.</span><span class="n">value</span>
 
 
-<span class="k">def</span> <span class="nf">set_np_shape</span><span 
class="p">(</span><span class="n">active</span><span class="p">):</span>
+<div class="viewcode-block" id="set_np_shape"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.set_np_shape">[docs]</a><span 
class="k">def</span> <span class="nf">set_np_shape</span><span 
class="p">(</span><span class="n">active</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Turns on/off NumPy shape semantics, in 
which `()` represents the shape of scalar tensors,</span>
 <span class="sd">    and tuples with `0` elements, for example, `(0,)`, `(1, 
0, 2)`, represent the shapes</span>
 <span class="sd">    of zero-size tensors. This is turned off by default for 
keeping backward compatibility.</span>
@@ -1280,10 +1280,10 @@
                          <span class="s1">&#39; deactivate both of 
them.&#39;</span><span class="p">)</span>
     <span class="n">prev</span> <span class="o">=</span> <span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">c_int</span><span class="p">()</span>
     <span class="n">check_call</span><span class="p">(</span><span 
class="n">_LIB</span><span class="o">.</span><span 
class="n">MXSetIsNumpyShape</span><span class="p">(</span><span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">c_int</span><span class="p">(</span><span 
class="n">active</span><span class="p">),</span> <span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">byref</span><span class="p">(</span><span class="n">prev</span><span 
class="p">)))</span>
-    <span class="k">return</span> <span class="nb">bool</span><span 
class="p">(</span><span class="n">prev</span><span class="o">.</span><span 
class="n">value</span><span class="p">)</span>
+    <span class="k">return</span> <span class="nb">bool</span><span 
class="p">(</span><span class="n">prev</span><span class="o">.</span><span 
class="n">value</span><span class="p">)</span></div>
 
 
-<span class="k">def</span> <span class="nf">is_np_shape</span><span 
class="p">():</span>
+<div class="viewcode-block" id="is_np_shape"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.is_np_shape">[docs]</a><span 
class="k">def</span> <span class="nf">is_np_shape</span><span 
class="p">():</span>
     <span class="sd">&quot;&quot;&quot;Checks whether the NumPy shape 
semantics is currently turned on.</span>
 <span class="sd">    In NumPy shape semantics, `()` represents the shape of 
scalar tensors,</span>
 <span class="sd">    and tuples with `0` elements, for example, `(0,)`, `(1, 
0, 2)`, represent</span>
@@ -1314,7 +1314,7 @@
 <span class="sd">    &quot;&quot;&quot;</span>
     <span class="n">curr</span> <span class="o">=</span> <span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">c_bool</span><span class="p">()</span>
     <span class="n">check_call</span><span class="p">(</span><span 
class="n">_LIB</span><span class="o">.</span><span 
class="n">MXIsNumpyShape</span><span class="p">(</span><span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">byref</span><span class="p">(</span><span class="n">curr</span><span 
class="p">)))</span>
-    <span class="k">return</span> <span class="n">curr</span><span 
class="o">.</span><span class="n">value</span>
+    <span class="k">return</span> <span class="n">curr</span><span 
class="o">.</span><span class="n">value</span></div>
 
 
 <span class="k">class</span> <span class="nc">_NumpyShapeScope</span><span 
class="p">(</span><span class="nb">object</span><span class="p">):</span>
@@ -1345,7 +1345,7 @@
             <span class="n">set_np_shape</span><span class="p">(</span><span 
class="bp">self</span><span class="o">.</span><span 
class="n">_prev_is_np_shape</span><span class="p">)</span>
 
 
-<span class="k">def</span> <span class="nf">np_shape</span><span 
class="p">(</span><span class="n">active</span><span class="o">=</span><span 
class="kc">True</span><span class="p">):</span>
+<div class="viewcode-block" id="np_shape"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.np_shape">[docs]</a><span 
class="k">def</span> <span class="nf">np_shape</span><span 
class="p">(</span><span class="n">active</span><span class="o">=</span><span 
class="kc">True</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Returns an activated/deactivated NumPy 
shape scope to be used in &#39;with&#39; statement</span>
 <span class="sd">    and captures code that needs the NumPy shape semantics, 
i.e. support of scalar and</span>
 <span class="sd">    zero-size tensors.</span>
@@ -1411,10 +1411,10 @@
 <span class="sd">            assert arg_shapes[0] == ()</span>
 <span class="sd">            assert out_shapes[0] == ()</span>
 <span class="sd">    &quot;&quot;&quot;</span>
-    <span class="k">return</span> <span class="n">_NumpyShapeScope</span><span 
class="p">(</span><span class="n">active</span><span class="p">)</span>
+    <span class="k">return</span> <span class="n">_NumpyShapeScope</span><span 
class="p">(</span><span class="n">active</span><span class="p">)</span></div>
 
 
-<span class="k">def</span> <span class="nf">use_np_shape</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
+<div class="viewcode-block" id="use_np_shape"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.use_np_shape">[docs]</a><span 
class="k">def</span> <span class="nf">use_np_shape</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;A decorator wrapping a function or 
class with activated NumPy-shape semantics.</span>
 <span class="sd">    When `func` is a function, this ensures that the 
execution of the function is scoped with NumPy</span>
 <span class="sd">    shape semantics, such as the support for zero-dim and 
zero size tensors. When</span>
@@ -1485,7 +1485,7 @@
         <span class="k">return</span> <span class="n">_with_np_shape</span>
     <span class="k">else</span><span class="p">:</span>
         <span class="k">raise</span> <span class="ne">TypeError</span><span 
class="p">(</span><span class="s1">&#39;use_np_shape can only decorate classes 
and callable objects, &#39;</span>
-                        <span class="s1">&#39;while received a </span><span 
class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span class="nb">str</span><span 
class="p">(</span><span class="nb">type</span><span class="p">(</span><span 
class="n">func</span><span class="p">))))</span>
+                        <span class="s1">&#39;while received a </span><span 
class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span class="nb">str</span><span 
class="p">(</span><span class="nb">type</span><span class="p">(</span><span 
class="n">func</span><span class="p">))))</span></div>
 
 
 <span class="k">def</span> <span class="nf">_sanity_check_params</span><span 
class="p">(</span><span class="n">func_name</span><span class="p">,</span> 
<span class="n">unsupported_params</span><span class="p">,</span> <span 
class="n">param_dict</span><span class="p">):</span>
@@ -1495,7 +1495,7 @@
                                       <span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span 
class="n">func_name</span><span class="p">,</span> <span 
class="n">param_name</span><span class="p">))</span>
 
 
-<span class="k">def</span> <span class="nf">set_module</span><span 
class="p">(</span><span class="n">module</span><span class="p">):</span>
+<div class="viewcode-block" id="set_module"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.set_module">[docs]</a><span 
class="k">def</span> <span class="nf">set_module</span><span 
class="p">(</span><span class="n">module</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Decorator for overriding __module__ on 
a function or class.</span>
 
 <span class="sd">    Example usage::</span>
@@ -1510,7 +1510,7 @@
         <span class="k">if</span> <span class="n">module</span> <span 
class="ow">is</span> <span class="ow">not</span> <span 
class="kc">None</span><span class="p">:</span>
             <span class="n">func</span><span class="o">.</span><span 
class="vm">__module__</span> <span class="o">=</span> <span 
class="n">module</span>
         <span class="k">return</span> <span class="n">func</span>
-    <span class="k">return</span> <span class="n">decorator</span>
+    <span class="k">return</span> <span class="n">decorator</span></div>
 
 
 <span class="k">class</span> <span class="nc">_NumpyArrayScope</span><span 
class="p">(</span><span class="nb">object</span><span class="p">):</span>
@@ -1538,7 +1538,7 @@
         <span class="n">_NumpyArrayScope</span><span class="o">.</span><span 
class="n">_current</span><span class="o">.</span><span class="n">value</span> 
<span class="o">=</span> <span class="bp">self</span><span 
class="o">.</span><span class="n">_old_scope</span>
 
 
-<span class="k">def</span> <span class="nf">np_array</span><span 
class="p">(</span><span class="n">active</span><span class="o">=</span><span 
class="kc">True</span><span class="p">):</span>
+<div class="viewcode-block" id="np_array"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.np_array">[docs]</a><span 
class="k">def</span> <span class="nf">np_array</span><span 
class="p">(</span><span class="n">active</span><span class="o">=</span><span 
class="kc">True</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Returns an activated/deactivated 
NumPy-array scope to be used in &#39;with&#39; statement</span>
 <span class="sd">    and captures code that needs the NumPy-array 
semantics.</span>
 
@@ -1564,10 +1564,10 @@
 <span class="sd">    _NumpyShapeScope</span>
 <span class="sd">        A scope object for wrapping the code w/ or w/o 
NumPy-shape semantics.</span>
 <span class="sd">    &quot;&quot;&quot;</span>
-    <span class="k">return</span> <span class="n">_NumpyArrayScope</span><span 
class="p">(</span><span class="n">active</span><span class="p">)</span>
+    <span class="k">return</span> <span class="n">_NumpyArrayScope</span><span 
class="p">(</span><span class="n">active</span><span class="p">)</span></div>
 
 
-<div class="viewcode-block" id="is_np_array"><a class="viewcode-back" 
href="../../api/legacy/image/index.html#mxnet.image.is_np_array">[docs]</a><span
 class="k">def</span> <span class="nf">is_np_array</span><span 
class="p">():</span>
+<div class="viewcode-block" id="is_np_array"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.is_np_array">[docs]</a><span 
class="k">def</span> <span class="nf">is_np_array</span><span 
class="p">():</span>
     <span class="sd">&quot;&quot;&quot;Checks whether the NumPy-array 
semantics is currently turned on.</span>
 <span class="sd">    This is currently used in Gluon for checking whether an 
array of type `mxnet.numpy.ndarray`</span>
 <span class="sd">    or `mx.nd.NDArray` should be created. For example, at the 
time when a parameter</span>
@@ -1590,7 +1590,7 @@
         <span class="n">_NumpyArrayScope</span><span class="o">.</span><span 
class="n">_current</span><span class="p">,</span> <span 
class="s2">&quot;value&quot;</span><span class="p">)</span> <span 
class="k">else</span> <span class="kc">False</span></div>
 
 
-<span class="k">def</span> <span class="nf">use_np_array</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
+<div class="viewcode-block" id="use_np_array"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.use_np_array">[docs]</a><span 
class="k">def</span> <span class="nf">use_np_array</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;A decorator wrapping Gluon `Block`s and 
all its methods, properties, and static functions</span>
 <span class="sd">    with the semantics of NumPy-array, which means that where 
ndarrays are created,</span>
 <span class="sd">    `mxnet.numpy.ndarray`s should be created, instead of 
legacy ndarrays of type `mx.nd.NDArray`.</span>
@@ -1669,10 +1669,10 @@
         <span class="k">return</span> <span class="n">_with_np_array</span>
     <span class="k">else</span><span class="p">:</span>
         <span class="k">raise</span> <span class="ne">TypeError</span><span 
class="p">(</span><span class="s1">&#39;use_np_array can only decorate classes 
and callable objects, &#39;</span>
-                        <span class="s1">&#39;while received a </span><span 
class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span class="nb">str</span><span 
class="p">(</span><span class="nb">type</span><span class="p">(</span><span 
class="n">func</span><span class="p">))))</span>
+                        <span class="s1">&#39;while received a </span><span 
class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span class="nb">str</span><span 
class="p">(</span><span class="nb">type</span><span class="p">(</span><span 
class="n">func</span><span class="p">))))</span></div>
 
 
-<span class="k">def</span> <span class="nf">use_np</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
+<div class="viewcode-block" id="use_np"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.use_np">[docs]</a><span 
class="k">def</span> <span class="nf">use_np</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;A convenience decorator for wrapping 
user provided functions and classes in the scope of</span>
 <span class="sd">    both NumPy-shape and NumPy-array semantics, which means 
that (1) empty tuples `()` and tuples</span>
 <span class="sd">    with zeros, such as `(0, 1)`, `(1, 0, 2)`, will be 
treated as scalar tensors&#39; shapes and</span>
@@ -1732,10 +1732,10 @@
 <span class="sd">    Function or class</span>
 <span class="sd">        A function or class wrapped in the Numpy-shape and 
NumPy-array scope.</span>
 <span class="sd">    &quot;&quot;&quot;</span>
-    <span class="k">return</span> <span class="n">use_np_shape</span><span 
class="p">(</span><span class="n">use_np_array</span><span 
class="p">(</span><span class="n">func</span><span class="p">))</span>
+    <span class="k">return</span> <span class="n">use_np_shape</span><span 
class="p">(</span><span class="n">use_np_array</span><span 
class="p">(</span><span class="n">func</span><span class="p">))</span></div>
 
 
-<span class="k">def</span> <span class="nf">np_ufunc_legal_option</span><span 
class="p">(</span><span class="n">key</span><span class="p">,</span> <span 
class="n">value</span><span class="p">):</span>
+<div class="viewcode-block" id="np_ufunc_legal_option"><a 
class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.np_ufunc_legal_option">[docs]</a><span
 class="k">def</span> <span class="nf">np_ufunc_legal_option</span><span 
class="p">(</span><span class="n">key</span><span class="p">,</span> <span 
class="n">value</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Checking if ufunc arguments are legal 
inputs</span>
 
 <span class="sd">    Parameters</span>
@@ -1767,10 +1767,10 @@
                               <span class="s1">&#39;float16&#39;</span><span 
class="p">,</span> <span class="s1">&#39;float32&#39;</span><span 
class="p">,</span> <span class="s1">&#39;float64&#39;</span><span 
class="p">]))</span>
     <span class="k">elif</span> <span class="n">key</span> <span 
class="o">==</span> <span class="s1">&#39;subok&#39;</span><span 
class="p">:</span>
         <span class="k">return</span> <span class="nb">isinstance</span><span 
class="p">(</span><span class="n">value</span><span class="p">,</span> <span 
class="nb">bool</span><span class="p">)</span>
-    <span class="k">return</span> <span class="kc">False</span>
+    <span class="k">return</span> <span class="kc">False</span></div>
 
 
-<span class="k">def</span> <span class="nf">wrap_np_unary_func</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
+<div class="viewcode-block" id="wrap_np_unary_func"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.wrap_np_unary_func">[docs]</a><span 
class="k">def</span> <span class="nf">wrap_np_unary_func</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;A convenience decorator for wrapping 
numpy-compatible unary ufuncs to provide uniform</span>
 <span class="sd">    error handling.</span>
 
@@ -1800,10 +1800,10 @@
                     <span class="k">raise</span> <span 
class="ne">TypeError</span><span class="p">(</span><span 
class="s2">&quot;</span><span class="si">{}</span><span 
class="s2">=</span><span class="si">{}</span><span class="s2"> not understood 
for operator </span><span class="si">{}</span><span class="s2">&quot;</span>
                                     <span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span class="n">key</span><span 
class="p">,</span> <span class="n">value</span><span class="p">,</span> <span 
class="n">func</span><span class="o">.</span><span 
class="vm">__name__</span><span class="p">))</span>
         <span class="k">return</span> <span class="n">func</span><span 
class="p">(</span><span class="n">x</span><span class="p">,</span> <span 
class="n">out</span><span class="o">=</span><span class="n">out</span><span 
class="p">)</span>
-    <span class="k">return</span> <span class="n">_wrap_np_unary_func</span>
+    <span class="k">return</span> <span 
class="n">_wrap_np_unary_func</span></div>
 
 
-<span class="k">def</span> <span class="nf">wrap_np_binary_func</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
+<div class="viewcode-block" id="wrap_np_binary_func"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.wrap_np_binary_func">[docs]</a><span 
class="k">def</span> <span class="nf">wrap_np_binary_func</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;A convenience decorator for wrapping 
numpy-compatible binary ufuncs to provide uniform</span>
 <span class="sd">    error handling.</span>
 
@@ -1831,11 +1831,11 @@
                     <span class="c1"># otherwise raise TypeError with not 
understood error message</span>
                     <span class="k">raise</span> <span 
class="ne">TypeError</span><span class="p">(</span><span 
class="s2">&quot;</span><span class="si">{}</span><span class="s2"> 
</span><span class="si">{}</span><span class="s2"> not 
understood&quot;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span class="n">key</span><span 
class="p">,</span> <span class="n">value</span><span class="p">))</span>
         <span class="k">return</span> <span class="n">func</span><span 
class="p">(</span><span class="n">x1</span><span class="p">,</span> <span 
class="n">x2</span><span class="p">,</span> <span class="n">out</span><span 
class="o">=</span><span class="n">out</span><span class="p">)</span>
-    <span class="k">return</span> <span class="n">_wrap_np_binary_func</span>
+    <span class="k">return</span> <span 
class="n">_wrap_np_binary_func</span></div>
 
 
 <span class="c1"># pylint: disable=exec-used</span>
-<span class="k">def</span> <span class="nf">numpy_fallback</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
+<div class="viewcode-block" id="numpy_fallback"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.numpy_fallback">[docs]</a><span 
class="k">def</span> <span class="nf">numpy_fallback</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;decorator for falling back to offical 
numpy for a specific function&quot;&quot;&quot;</span>
     <span class="k">def</span> <span class="nf">get_ctx</span><span 
class="p">(</span><span class="n">ctx</span><span class="p">,</span> <span 
class="n">new_ctx</span><span class="p">):</span>
         <span class="k">if</span> <span class="n">ctx</span> <span 
class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
@@ -1917,7 +1917,7 @@
         <span class="n">ret</span> <span class="o">=</span> <span 
class="n">_as_mx_np_array</span><span class="p">(</span><span 
class="n">ret</span><span class="p">,</span> <span class="n">ctx</span><span 
class="o">=</span><span class="n">ctx</span><span class="p">)</span>
         <span class="k">return</span> <span class="n">ret</span>
 
-    <span class="k">return</span> <span 
class="n">_fallback_to_official_np</span>
+    <span class="k">return</span> <span 
class="n">_fallback_to_official_np</span></div>
 <span class="c1"># pylint: enable=exec-used</span>
 
 
@@ -1947,7 +1947,7 @@
     <span class="k">return</span> <span class="n">cur_state</span>
 
 
-<span class="k">def</span> <span class="nf">set_np</span><span 
class="p">(</span><span class="n">shape</span><span class="o">=</span><span 
class="kc">True</span><span class="p">,</span> <span 
class="n">array</span><span class="o">=</span><span class="kc">True</span><span 
class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span 
class="kc">False</span><span class="p">):</span>
+<div class="viewcode-block" id="set_np"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.set_np">[docs]</a><span 
class="k">def</span> <span class="nf">set_np</span><span 
class="p">(</span><span class="n">shape</span><span class="o">=</span><span 
class="kc">True</span><span class="p">,</span> <span 
class="n">array</span><span class="o">=</span><span class="kc">True</span><span 
class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span 
class="kc">False< [...]
     <span class="sd">&quot;&quot;&quot;Setting NumPy shape and array semantics 
at the same time.</span>
 <span class="sd">    It is required to keep NumPy shape semantics active while 
activating NumPy array semantics.</span>
 <span class="sd">    Deactivating NumPy shape semantics while NumPy array 
semantics is still active is not allowed.</span>
@@ -2031,18 +2031,18 @@
         <span class="k">raise</span> <span class="ne">ValueError</span><span 
class="p">(</span><span class="s1">&#39;NumPy Shape semantics is required in 
using NumPy array semantics.&#39;</span><span class="p">)</span>
     <span class="n">_set_np_array</span><span class="p">(</span><span 
class="n">array</span><span class="p">)</span>
     <span class="n">set_np_shape</span><span class="p">(</span><span 
class="n">shape</span><span class="p">)</span>
-    <span class="n">set_np_default_dtype</span><span class="p">(</span><span 
class="n">dtype</span><span class="p">)</span>
+    <span class="n">set_np_default_dtype</span><span class="p">(</span><span 
class="n">dtype</span><span class="p">)</span></div>
 
 
-<span class="k">def</span> <span class="nf">reset_np</span><span 
class="p">():</span>
+<div class="viewcode-block" id="reset_np"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.reset_np">[docs]</a><span 
class="k">def</span> <span class="nf">reset_np</span><span class="p">():</span>
     <span class="sd">&quot;&quot;&quot;Deactivate NumPy shape and array and 
deafult dtype semantics at the same time.&quot;&quot;&quot;</span>
-    <span class="n">set_np</span><span class="p">(</span><span 
class="n">shape</span><span class="o">=</span><span 
class="kc">False</span><span class="p">,</span> <span 
class="n">array</span><span class="o">=</span><span 
class="kc">False</span><span class="p">,</span> <span 
class="n">dtype</span><span class="o">=</span><span 
class="kc">False</span><span class="p">)</span>
+    <span class="n">set_np</span><span class="p">(</span><span 
class="n">shape</span><span class="o">=</span><span 
class="kc">False</span><span class="p">,</span> <span 
class="n">array</span><span class="o">=</span><span 
class="kc">False</span><span class="p">,</span> <span 
class="n">dtype</span><span class="o">=</span><span 
class="kc">False</span><span class="p">)</span></div>
 
 
 <span class="n">_CUDA_SUCCESS</span> <span class="o">=</span> <span 
class="mi">0</span>
 
 
-<span class="k">def</span> <span 
class="nf">get_cuda_compute_capability</span><span class="p">(</span><span 
class="n">ctx</span><span class="p">):</span>
+<div class="viewcode-block" id="get_cuda_compute_capability"><a 
class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.get_cuda_compute_capability">[docs]</a><span
 class="k">def</span> <span class="nf">get_cuda_compute_capability</span><span 
class="p">(</span><span class="n">ctx</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Returns the cuda compute capability of 
the input `ctx`.</span>
 
 <span class="sd">    Parameters</span>
@@ -2097,10 +2097,10 @@
         <span class="n">cuda</span><span class="o">.</span><span 
class="n">cuGetErrorString</span><span class="p">(</span><span 
class="n">ret</span><span class="p">,</span> <span class="n">ctypes</span><span 
class="o">.</span><span class="n">byref</span><span class="p">(</span><span 
class="n">error_str</span><span class="p">))</span>
         <span class="k">raise</span> <span class="ne">RuntimeError</span><span 
class="p">(</span><span class="s1">&#39;cuDeviceComputeCapability failed with 
error code </span><span class="si">{}</span><span class="s1">: </span><span 
class="si">{}</span><span class="s1">&#39;</span>
                            <span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span class="n">ret</span><span 
class="p">,</span> <span class="n">error_str</span><span 
class="o">.</span><span class="n">value</span><span class="o">.</span><span 
class="n">decode</span><span class="p">()))</span>
-    <span class="k">return</span> <span class="n">cc_major</span><span 
class="o">.</span><span class="n">value</span> <span class="o">*</span> <span 
class="mi">10</span> <span class="o">+</span> <span 
class="n">cc_minor</span><span class="o">.</span><span class="n">value</span>
+    <span class="k">return</span> <span class="n">cc_major</span><span 
class="o">.</span><span class="n">value</span> <span class="o">*</span> <span 
class="mi">10</span> <span class="o">+</span> <span 
class="n">cc_minor</span><span class="o">.</span><span 
class="n">value</span></div>
 
 
-<span class="k">def</span> <span class="nf">default_array</span><span 
class="p">(</span><span class="n">source_array</span><span class="p">,</span> 
<span class="n">ctx</span><span class="o">=</span><span 
class="kc">None</span><span class="p">,</span> <span 
class="n">dtype</span><span class="o">=</span><span class="kc">None</span><span 
class="p">):</span>
+<div class="viewcode-block" id="default_array"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.default_array">[docs]</a><span 
class="k">def</span> <span class="nf">default_array</span><span 
class="p">(</span><span class="n">source_array</span><span class="p">,</span> 
<span class="n">ctx</span><span class="o">=</span><span 
class="kc">None</span><span class="p">,</span> <span 
class="n">dtype</span><span class="o">=</span><span class="kc">None</span><span 
class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Creates an array from any object 
exposing the default(nd or np) array interface.</span>
 
 <span class="sd">    Parameters</span>
@@ -2124,7 +2124,7 @@
     <span class="k">if</span> <span class="n">is_np_array</span><span 
class="p">():</span>
         <span class="k">return</span> <span class="n">_mx_np</span><span 
class="o">.</span><span class="n">array</span><span class="p">(</span><span 
class="n">source_array</span><span class="p">,</span> <span 
class="n">ctx</span><span class="o">=</span><span class="n">ctx</span><span 
class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span 
class="n">dtype</span><span class="p">)</span>
     <span class="k">else</span><span class="p">:</span>
-        <span class="k">return</span> <span class="n">_mx_nd</span><span 
class="o">.</span><span class="n">array</span><span class="p">(</span><span 
class="n">source_array</span><span class="p">,</span> <span 
class="n">ctx</span><span class="o">=</span><span class="n">ctx</span><span 
class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span 
class="n">dtype</span><span class="p">)</span>
+        <span class="k">return</span> <span class="n">_mx_nd</span><span 
class="o">.</span><span class="n">array</span><span class="p">(</span><span 
class="n">source_array</span><span class="p">,</span> <span 
class="n">ctx</span><span class="o">=</span><span class="n">ctx</span><span 
class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span 
class="n">dtype</span><span class="p">)</span></div>
 
 <span class="k">class</span> <span 
class="nc">_NumpyDefaultDtypeScope</span><span class="p">(</span><span 
class="nb">object</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Scope for managing NumPy default dtype 
semantics.</span>
@@ -2154,7 +2154,7 @@
            <span class="bp">self</span><span class="o">.</span><span 
class="n">_prev_is_np_default_dtype</span> <span class="o">!=</span> <span 
class="bp">self</span><span class="o">.</span><span 
class="n">_enter_is_np_default_dtype</span><span class="p">:</span>
             <span class="n">set_np_default_dtype</span><span 
class="p">(</span><span class="bp">self</span><span class="o">.</span><span 
class="n">_prev_is_np_default_dtype</span><span class="p">)</span>
 
-<span class="k">def</span> <span class="nf">np_default_dtype</span><span 
class="p">(</span><span class="n">active</span><span class="o">=</span><span 
class="kc">True</span><span class="p">):</span>
+<div class="viewcode-block" id="np_default_dtype"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.np_default_dtype">[docs]</a><span 
class="k">def</span> <span class="nf">np_default_dtype</span><span 
class="p">(</span><span class="n">active</span><span class="o">=</span><span 
class="kc">True</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Returns an activated/deactivated 
NumPy-default_dtype scope to be used in &#39;with&#39; statement</span>
 <span class="sd">    and captures code that needs the NumPy default dtype 
semantics. i.e. default dtype is float64.</span>
 
@@ -2186,9 +2186,9 @@
 <span class="sd">            assert arr.dtype == &#39;float32&#39;</span>
 
 <span class="sd">    &quot;&quot;&quot;</span>
-    <span class="k">return</span> <span 
class="n">_NumpyDefaultDtypeScope</span><span class="p">(</span><span 
class="n">active</span><span class="p">)</span>
+    <span class="k">return</span> <span 
class="n">_NumpyDefaultDtypeScope</span><span class="p">(</span><span 
class="n">active</span><span class="p">)</span></div>
 
-<span class="k">def</span> <span class="nf">use_np_default_dtype</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
+<div class="viewcode-block" id="use_np_default_dtype"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.use_np_default_dtype">[docs]</a><span
 class="k">def</span> <span class="nf">use_np_default_dtype</span><span 
class="p">(</span><span class="n">func</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;A decorator wrapping a function or 
class with activated NumPy-default_dtype semantics.</span>
 <span class="sd">    When `func` is a function, this ensures that the 
execution of the function is scoped with NumPy</span>
 <span class="sd">    default dtype semantics, with the support for float64 as 
default dtype.</span>
@@ -2258,9 +2258,9 @@
         <span class="k">return</span> <span 
class="n">_with_np_default_dtype</span>
     <span class="k">else</span><span class="p">:</span>
         <span class="k">raise</span> <span class="ne">TypeError</span><span 
class="p">(</span><span class="s1">&#39;use_np_default_dtype can only decorate 
classes and callable objects, &#39;</span>
-                        <span class="s1">&#39;while received a </span><span 
class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span class="nb">str</span><span 
class="p">(</span><span class="nb">type</span><span class="p">(</span><span 
class="n">func</span><span class="p">))))</span>
+                        <span class="s1">&#39;while received a </span><span 
class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span 
class="n">format</span><span class="p">(</span><span class="nb">str</span><span 
class="p">(</span><span class="nb">type</span><span class="p">(</span><span 
class="n">func</span><span class="p">))))</span></div>
 
-<span class="k">def</span> <span class="nf">is_np_default_dtype</span><span 
class="p">():</span>
+<div class="viewcode-block" id="is_np_default_dtype"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.is_np_default_dtype">[docs]</a><span 
class="k">def</span> <span class="nf">is_np_default_dtype</span><span 
class="p">():</span>
     <span class="sd">&quot;&quot;&quot;Checks whether the NumPy default dtype 
semantics is currently turned on.</span>
 <span class="sd">    In NumPy default dtype semantics, default dtype is 
float64.</span>
 
@@ -2290,9 +2290,9 @@
 <span class="sd">    &quot;&quot;&quot;</span>
     <span class="n">curr</span> <span class="o">=</span> <span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">c_bool</span><span class="p">()</span>
     <span class="n">check_call</span><span class="p">(</span><span 
class="n">_LIB</span><span class="o">.</span><span 
class="n">MXIsNumpyDefaultDtype</span><span class="p">(</span><span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">byref</span><span class="p">(</span><span class="n">curr</span><span 
class="p">)))</span>
-    <span class="k">return</span> <span class="n">curr</span><span 
class="o">.</span><span class="n">value</span>
+    <span class="k">return</span> <span class="n">curr</span><span 
class="o">.</span><span class="n">value</span></div>
 
-<span class="k">def</span> <span class="nf">set_np_default_dtype</span><span 
class="p">(</span><span class="n">is_np_default_dtype</span><span 
class="o">=</span><span class="kc">True</span><span class="p">):</span>  <span 
class="c1"># pylint: disable=redefined-outer-name</span>
+<div class="viewcode-block" id="set_np_default_dtype"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.set_np_default_dtype">[docs]</a><span
 class="k">def</span> <span class="nf">set_np_default_dtype</span><span 
class="p">(</span><span class="n">is_np_default_dtype</span><span 
class="o">=</span><span class="kc">True</span><span class="p">):</span>  <span 
class="c1"># pylint: disable=redefined-outer-name</span>
     <span class="sd">&quot;&quot;&quot;Turns on/off NumPy default dtype 
semantics, because mxnet.numpy.ndarray use</span>
 <span class="sd">    32 bit data storage as default (e.g. float32 and int 32) 
while offical NumPy use</span>
 <span class="sd">    64 bit data storage as default (e.g. float64 and 
int64).</span>
@@ -2330,10 +2330,10 @@
             <span class="n">_set_np_default_dtype_logged</span> <span 
class="o">=</span> <span class="kc">True</span>
     <span class="n">prev</span> <span class="o">=</span> <span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">c_bool</span><span class="p">()</span>
     <span class="n">check_call</span><span class="p">(</span><span 
class="n">_LIB</span><span class="o">.</span><span 
class="n">MXSetIsNumpyDefaultDtype</span><span class="p">(</span><span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">c_bool</span><span class="p">(</span><span 
class="n">is_np_default_dtype</span><span class="p">),</span> <span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">byref</span><span class="p">(</span><span class="n">prev</span><sp 
[...]
-    <span class="k">return</span> <span class="n">prev</span><span 
class="o">.</span><span class="n">value</span>
+    <span class="k">return</span> <span class="n">prev</span><span 
class="o">.</span><span class="n">value</span></div>
 
 
-<span class="k">def</span> <span class="nf">getenv</span><span 
class="p">(</span><span class="n">name</span><span class="p">):</span>
+<div class="viewcode-block" id="getenv"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.getenv">[docs]</a><span 
class="k">def</span> <span class="nf">getenv</span><span 
class="p">(</span><span class="n">name</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Get the setting of an environment 
variable from the C Runtime.</span>
 
 <span class="sd">    Parameters</span>
@@ -2348,10 +2348,10 @@
 <span class="sd">    &quot;&quot;&quot;</span>
     <span class="n">ret</span> <span class="o">=</span> <span 
class="n">ctypes</span><span class="o">.</span><span 
class="n">c_char_p</span><span class="p">()</span>
     <span class="n">check_call</span><span class="p">(</span><span 
class="n">_LIB</span><span class="o">.</span><span 
class="n">MXGetEnv</span><span class="p">(</span><span 
class="n">c_str</span><span class="p">(</span><span class="n">name</span><span 
class="p">),</span> <span class="n">ctypes</span><span class="o">.</span><span 
class="n">byref</span><span class="p">(</span><span class="n">ret</span><span 
class="p">)))</span>
-    <span class="k">return</span> <span class="kc">None</span> <span 
class="k">if</span> <span class="n">ret</span><span class="o">.</span><span 
class="n">value</span> <span class="ow">is</span> <span class="kc">None</span> 
<span class="k">else</span> <span class="n">py_str</span><span 
class="p">(</span><span class="n">ret</span><span class="o">.</span><span 
class="n">value</span><span class="p">)</span>
+    <span class="k">return</span> <span class="kc">None</span> <span 
class="k">if</span> <span class="n">ret</span><span class="o">.</span><span 
class="n">value</span> <span class="ow">is</span> <span class="kc">None</span> 
<span class="k">else</span> <span class="n">py_str</span><span 
class="p">(</span><span class="n">ret</span><span class="o">.</span><span 
class="n">value</span><span class="p">)</span></div>
 
 
-<span class="k">def</span> <span class="nf">setenv</span><span 
class="p">(</span><span class="n">name</span><span class="p">,</span> <span 
class="n">value</span><span class="p">):</span>
+<div class="viewcode-block" id="setenv"><a class="viewcode-back" 
href="../../api/util/index.html#mxnet.util.setenv">[docs]</a><span 
class="k">def</span> <span class="nf">setenv</span><span 
class="p">(</span><span class="n">name</span><span class="p">,</span> <span 
class="n">value</span><span class="p">):</span>
     <span class="sd">&quot;&quot;&quot;Set an environment variable in the C 
Runtime.</span>
 
 <span class="sd">    Parameters</span>
@@ -2362,7 +2362,7 @@
 <span class="sd">        The desired value to set the environment variable to</span>
 <span class="sd">    &quot;&quot;&quot;</span>
     <span class="n">passed_value</span> <span class="o">=</span> <span 
class="kc">None</span> <span class="k">if</span> <span class="n">value</span> 
<span class="ow">is</span> <span class="kc">None</span> <span 
class="k">else</span> <span class="n">c_str</span><span class="p">(</span><span 
class="n">value</span><span class="p">)</span>
-    <span class="n">check_call</span><span class="p">(</span><span 
class="n">_LIB</span><span class="o">.</span><span 
class="n">MXSetEnv</span><span class="p">(</span><span 
class="n">c_str</span><span class="p">(</span><span class="n">name</span><span 
class="p">),</span> <span class="n">passed_value</span><span class="p">))</span>
+    <span class="n">check_call</span><span class="p">(</span><span 
class="n">_LIB</span><span class="o">.</span><span 
class="n">MXSetEnv</span><span class="p">(</span><span 
class="n">c_str</span><span class="p">(</span><span class="n">name</span><span 
class="p">),</span> <span class="n">passed_value</span><span 
class="p">))</span></div>
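Both `getenv` and `setenv` above round-trip strings through ctypes (`c_str`, `c_char_p`, `byref`) into the MXNet C runtime. The same marshalling pattern can be sketched against the process's own libc (an assumption for illustration on POSIX systems; libmxnet's `MXGetEnv`/`MXSetEnv` are analogous but sit behind `check_call`):

```python
import ctypes

# Load the current process's C runtime (POSIX); stands in for _LIB/libmxnet.
libc = ctypes.CDLL(None)
libc.getenv.restype = ctypes.c_char_p          # NULL maps to Python None
libc.getenv.argtypes = [ctypes.c_char_p]
libc.setenv.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_int]
libc.setenv.restype = ctypes.c_int

def c_str(string):
    """Encode a Python str to the NUL-terminated bytes C expects (cf. mx.base.c_str)."""
    return ctypes.c_char_p(string.encode('utf-8'))

def getenv(name):
    """Get an environment variable from the C runtime, or None if unset."""
    ret = libc.getenv(c_str(name))
    return None if ret is None else ret.decode('utf-8')

def setenv(name, value):
    """Set an environment variable in the C runtime (overwrite existing)."""
    libc.setenv(c_str(name), c_str(value), 1)
```

MXNet ships its own wrappers because the Python interpreter and the native library may not share one environment copy on every platform (notably across Windows C runtimes), so mutating `os.environ` alone is not always visible to native code.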
 </pre></div>
 
         <hr class="feedback-hr-top" />
diff --git a/date.txt b/date.txt
deleted file mode 100644
index 17ee91f..0000000
--- a/date.txt
+++ /dev/null
@@ -1 +0,0 @@
-Wed Sep  2 12:43:32 UTC 2020
diff --git a/feed.xml b/feed.xml
index 5ae4c29..709de34 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8"?><feed 
xmlns="http://www.w3.org/2005/Atom"; ><generator uri="https://jekyllrb.com/"; 
version="4.0.0">Jekyll</generator><link 
href="https://mxnet.apache.org/feed.xml"; rel="self" type="application/atom+xml" 
/><link href="https://mxnet.apache.org/"; rel="alternate" type="text/html" 
/><updated>2020-09-02T12:33:47+00:00</updated><id>https://mxnet.apache.org/feed.xml</id><title
 type="html">Apache MXNet</title><subtitle>A flexible and efficient library for 
deep [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8"?><feed 
xmlns="http://www.w3.org/2005/Atom"; ><generator uri="https://jekyllrb.com/"; 
version="4.0.0">Jekyll</generator><link 
href="https://mxnet.apache.org/feed.xml"; rel="self" type="application/atom+xml" 
/><link href="https://mxnet.apache.org/"; rel="alternate" type="text/html" 
/><updated>2020-09-02T18:33:14+00:00</updated><id>https://mxnet.apache.org/feed.xml</id><title
 type="html">Apache MXNet</title><subtitle>A flexible and efficient library for 
deep [...]
\ No newline at end of file
diff --git a/get_started/download.html b/get_started/download.html
index f00e5bc..34c8a7b 100644
--- a/get_started/download.html
+++ b/get_started/download.html
@@ -293,10 +293,16 @@ encouraged to contribute to our development version on
   </thead>
   <tbody>
     <tr>
+      <td>1.7.0</td>
+      <td><a 
href="http://www.apache.org/dyn/closer.lua?filename=incubator/mxnet/1.7.0/apache-mxnet-src-1.7.0-incubating.tar.gz&amp;action=download";>Download</a></td>
+      <td><a 
href="https://downloads.apache.org/incubator/mxnet/1.7.0/apache-mxnet-src-1.7.0-incubating.tar.gz.asc";>Download</a></td>
+      <td><a 
href="https://downloads.apache.org/incubator/mxnet/1.7.0/apache-mxnet-src-1.7.0-incubating.tar.gz.sha512";>Download</a></td>
+    </tr>
+    <tr>
       <td>1.6.0</td>
-      <td><a 
href="https://www.apache.org/dyn/closer.cgi/incubator/mxnet/1.6.0/apache-mxnet-src-1.6.0-incubating.tar.gz";>Download</a></td>
-      <td><a 
href="https://downloads.apache.org/incubator/mxnet/1.6.0/apache-mxnet-src-1.6.0-incubating.tar.gz.asc";>Download</a></td>
-      <td><a 
href="https://downloads.apache.org/incubator/mxnet/1.6.0/apache-mxnet-src-1.6.0-incubating.tar.gz.sha512";>Download</a></td>
+      <td><a 
href="https://archive.apache.org/dist/incubator/mxnet/1.6.0/apache-mxnet-src-1.6.0-incubating.tar.gz";>Download</a></td>
+      <td><a 
href="https://archive.apache.org/dist/incubator/mxnet/1.6.0/apache-mxnet-src-1.6.0-incubating.tar.gz.asc";>Download</a></td>
+      <td><a 
href="https://archive.apache.org/dist/incubator/mxnet/1.6.0/apache-mxnet-src-1.6.0-incubating.tar.gz.sha512";>Download</a></td>
     </tr>
     <tr>
       <td>1.5.1</td>
diff --git a/get_started/index.html b/get_started/index.html
index e888688..4afce11 100644
--- a/get_started/index.html
+++ b/get_started/index.html
@@ -289,7 +289,7 @@ if(!(window.doNotTrack === "1" || navigator.doNotTrack === 
"1" || navigator.doNo
 <script>
     /** Defaults **/
     /** See options.js for the full ugly script **/
-    var versionSelect = defaultVersion = 'v1.6.0';
+    var versionSelect = defaultVersion = 'v1.7.0';
     var platformSelect = 'linux';
     var languageSelect = 'python';
     var processorSelect = 'cpu';
@@ -312,13 +312,14 @@ if(!(window.doNotTrack === "1" || navigator.doNotTrack 
=== "1" || navigator.doNo
             <div class="col-9 install-right">
                 <div class="dropdown" id="version-dropdown-container">
                     <button class="current-version dropbtn btn" type="button" 
data-toggle="dropdown">
-                        v1.6.0
+                        v1.7.0
                         <svg class="dropdown-caret" viewBox="0 0 32 32" 
class="icon icon-caret-bottom" aria-hidden="true">
                             <path class="dropdown-caret-path" d="M24 
11.305l-7.997 11.39L8 11.305z"></path>
                         </svg>
                     </button>
                     <ul class="opt-group version-dropdown">
-                        <li class="opt active versions"><a 
href="#">v1.6.0</a></li>
+                        <li class="opt active versions"><a 
href="#">v1.7.0</a></li>
+                        <li class="opt versions"><a href="#">v1.6.0</a></li>
                         <li class="opt versions"><a href="#">v1.5.1</a></li>
                         <li class="opt versions"><a href="#">v1.4.1</a></li>
                         <li class="opt versions"><a href="#">v1.3.1</a></li>
@@ -447,14 +448,43 @@ page</a>.</p>
 
 <p>Run the following command:</p>
 
-<div class="v1-6-0">
+<div class="v1-7-0">
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet</code></pre></figure>
 
+Starting from the 1.7.0 release, oneDNN (previously known as MKL-DNN/DNNL) is
+enabled in pip packages by default.
+
+oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
+performance library of basic building blocks for deep learning applications.
+The library is optimized for Intel Architecture Processors, Intel Processor
+Graphics and Xe architecture-based Graphics. Support for other architectures
+such as Arm* 64-bit Architecture (AArch64) and OpenPOWER* Power ISA (PPC64) is
+experimental.
+
+oneDNN is intended for deep learning applications and framework developers
+interested in improving application performance on Intel CPUs and GPUs; more
+details can be found <a href="https://github.com/oneapi-src/oneDNN";>here</a>.
+
+You can find performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
+
+To install native MXNet without oneDNN, run the following command:
+
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install 
</span>mxnet-native</code></pre></figure>
+
+</div>
+<p><!-- End of v1-7-0 --></p>
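The new v1-7-0 block above replaces the separate mxnet-mkl wheel: the plain mxnet package now carries oneDNN, with mxnet-native as the opt-out. A quick way to check which backend a given install actually has (a sketch, assuming the 1.7.0 wheel is installed; `MKLDNN` is the feature name the 1.x runtime reports):

```shell
# Default wheel: oneDNN (MKL-DNN/DNNL) enabled as of 1.7.0
pip install mxnet==1.7.0

# Alternative without oneDNN
# pip install mxnet-native==1.7.0

# Inspect the build-time feature flags of the installed package
python -c "import mxnet; print(mxnet.runtime.Features().is_enabled('MKLDNN'))"
```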
+
+<div class="v1-6-0">
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.6.0</code></pre></figure>
+
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
-<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install 
</span>mxnet-mkl</code></pre></figure>
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.6.0</code></pre></figure>
 
 </div>
 <p><!-- End of v1-6-0 --></p>
@@ -463,8 +493,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.5.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.5.1</code></pre></figure>
 
@@ -476,8 +507,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.4.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.4.1</code></pre></figure>
 
@@ -488,8 +520,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.3.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.3.1</code></pre></figure>
 
@@ -500,8 +533,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.2.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.2.1</code></pre></figure>
 
@@ -545,7 +579,7 @@ For MXNet 0.12.0:
 <p>You can then <a href="/get_started/validate_mxnet.html">validate your MXNet 
installation</a>.</p>
 
 <div style="text-align: center">
-    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.6.0.png";
 alt="pip packages" />
+    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.7.0.png";
 alt="pip packages" />
 </div>
 
 <p><strong>NOTES:</strong></p>
@@ -607,12 +641,18 @@ page</a>.</p>
 
 <p>Run the following command:</p>
 
-<div class="v1-6-0">
+<div class="v1-7-0">
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash"><span class="nv">$ </span>pip <span class="nb">install 
</span>mxnet-cu102</code></pre></figure>
 
 </div>
 <p><!-- End of v1-7-0 --></p>
 
+<div class="v1-6-0">
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash"><span class="nv">$ </span>pip <span class="nb">install 
</span>mxnet-cu102<span class="o">==</span>1.6.0</code></pre></figure>
+
+</div>
+<p><!-- End of v1-6-0 --></p>
+
 <div class="v1-5-1">
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash"><span class="nv">$ </span>pip <span class="nb">install 
</span>mxnet-cu101<span class="o">==</span>1.5.1</code></pre></figure>
 
@@ -670,7 +710,7 @@ page</a>.</p>
 <p>You can then <a href="/get_started/validate_mxnet.html">validate your MXNet 
installation</a>.</p>
 
 <div style="text-align: center">
-    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.6.0.png";
 alt="pip packages" />
+    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.7.0.png";
 alt="pip packages" />
 </div>
 
 <p><strong>NOTES:</strong></p>
@@ -803,14 +843,43 @@ page</a>.</p>
 
 <p>Run the following command:</p>
 
-<div class="v1-6-0">
+<div class="v1-7-0">
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet</code></pre></figure>
 
+Starting from the 1.7.0 release, oneDNN (previously known as MKL-DNN/DNNL) is
+enabled in pip packages by default.
+
+oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
+performance library of basic building blocks for deep learning applications.
+The library is optimized for Intel Architecture Processors, Intel Processor
+Graphics and Xe architecture-based Graphics. Support for other architectures
+such as Arm* 64-bit Architecture (AArch64) and OpenPOWER* Power ISA (PPC64) is
+experimental.
+
+oneDNN is intended for deep learning applications and framework developers
+interested in improving application performance on Intel CPUs and GPUs; more
+details can be found <a href="https://github.com/oneapi-src/oneDNN";>here</a>.
+
+You can find performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
+
+To install native MXNet without oneDNN, run the following command:
+
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install 
</span>mxnet-native</code></pre></figure>
+
+</div>
+<p><!-- End of v1-7-0 --></p>
+
+<div class="v1-6-0">
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.6.0</code></pre></figure>
+
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
-<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install 
</span>mxnet-mkl</code></pre></figure>
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.6.0</code></pre></figure>
 
 </div>
 <p><!-- End of v1-6-0 --></p>
@@ -819,8 +888,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.5.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.5.1</code></pre></figure>
 
@@ -832,8 +902,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.4.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.4.1</code></pre></figure>
 
@@ -844,8 +915,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.3.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.3.1</code></pre></figure>
 
@@ -856,8 +928,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.2.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.2.1</code></pre></figure>
 
@@ -901,7 +974,7 @@ For MXNet 0.12.0:
 <p>You can then <a href="/get_started/validate_mxnet.html">validate your MXNet 
installation</a>.</p>
 
 <div style="text-align: center">
-    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.6.0.png";
 alt="pip packages" />
+    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.7.0.png";
 alt="pip packages" />
 </div>
 
 <p><strong>NOTES:</strong></p>
@@ -1028,14 +1101,43 @@ page</a>.</p>
 
 <p>Run the following command:</p>
 
-<div class="v1-6-0">
+<div class="v1-7-0">
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet</code></pre></figure>
 
+Starting from the 1.7.0 release, oneDNN (previously known as MKL-DNN/DNNL) is
+enabled in pip packages by default.
+
+oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
+performance library of basic building blocks for deep learning applications.
+The library is optimized for Intel Architecture Processors, Intel Processor
+Graphics and Xe architecture-based Graphics. Support for other architectures
+such as Arm* 64-bit Architecture (AArch64) and OpenPOWER* Power ISA (PPC64) is
+experimental.
+
+oneDNN is intended for deep learning applications and framework developers
+interested in improving application performance on Intel CPUs and GPUs; more
+details can be found <a href="https://github.com/oneapi-src/oneDNN";>here</a>.
+
+You can find performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
+
+To install native MXNet without oneDNN, run the following command:
+
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install 
</span>mxnet-native</code></pre></figure>
+
+</div>
+<p><!-- End of v1-7-0 --></p>
+
+<div class="v1-6-0">
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.6.0</code></pre></figure>
+
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
-<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install 
</span>mxnet-mkl</code></pre></figure>
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.6.0</code></pre></figure>
 
 </div>
 <p><!-- End of v1-6-0 --></p>
@@ -1044,8 +1146,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.5.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.5.1</code></pre></figure>
 
@@ -1057,8 +1160,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.4.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.4.1</code></pre></figure>
 
@@ -1069,8 +1173,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.3.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.3.1</code></pre></figure>
 
@@ -1081,8 +1186,9 @@ in the <a 
href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning guide</a>.
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span><span 
class="nv">mxnet</span><span class="o">==</span>1.2.1</code></pre></figure>
 
 MKL-DNN enabled pip packages are optimized for Intel hardware. You can find
-performance numbers
-in the <a href="https://mxnet.io/api/faq/perf#intel-cpu";>MXNet tuning 
guide</a>.
+performance numbers in the
+<a href="https://mxnet.apache.org/versions/1.6/api/faq/perf.html#intel-cpu";>
+MXNet tuning guide</a>.
 
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash">pip <span class="nb">install </span>mxnet-mkl<span 
class="o">==</span>1.2.1</code></pre></figure>
 
@@ -1126,7 +1232,7 @@ For MXNet 0.12.0:
 <p>You can then <a href="/get_started/validate_mxnet.html">validate your MXNet 
installation</a>.</p>
 
 <div style="text-align: center">
-    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.6.0.png";
 alt="pip packages" />
+    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.7.0.png";
 alt="pip packages" />
 </div>
 
 <p><strong>NOTES:</strong></p>
@@ -1162,12 +1268,18 @@ page</a>.</p>
 
 <p>Run the following command:</p>
 
-<div class="v1-6-0">
+<div class="v1-7-0">
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash"><span class="nv">$ </span>pip <span class="nb">install 
</span>mxnet-cu102</code></pre></figure>
 
 </div>
-<p><!-- End of v1-6-0 --></p>
+<p><!-- End of v1-7-0 --></p>
 
+<div class="v1-6-0">
+<figure class="highlight"><pre><code class="language-bash" 
data-lang="bash"><span class="nv">$ </span>pip <span class="nb">install 
</span>mxnet-cu102<span class="o">==</span>1.6.0</code></pre></figure>
+
+</div>
+<p><!-- End of v1-6-0 --></p>
+
 <div class="v1-5-1">
 <figure class="highlight"><pre><code class="language-bash" 
data-lang="bash"><span class="nv">$ </span>pip <span class="nb">install 
</span>mxnet-cu101<span class="o">==</span>1.5.1</code></pre></figure>
 
@@ -1225,7 +1337,7 @@ page</a>.</p>
 <p>You can then <a href="/get_started/validate_mxnet.html">validate your MXNet 
installation</a>.</p>
 
 <div style="text-align: center">
-    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.6.0.png";
 alt="pip packages" />
+    <img 
src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.7.0.png";
 alt="pip packages" />
 </div>
 
 <p><strong>NOTES:</strong></p>

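The hunks above pin each non-current MXNet flavor to an exact release (`mxnet-mkl==1.6.0`, `mxnet-cu102==1.6.0`, and so on) while the current stable (`mxnet-native` for 1.7) installs unpinned. As an illustrative sketch only (not part of this commit; the helper name is invented), the naming scheme those snippets follow can be expressed as:

```python
# Hypothetical helper (not part of this commit): assemble the pip
# requirement strings used by the get_started page's install snippets.
def mxnet_pip_spec(version, variant="plain"):
    """Build a pip requirement for an MXNet flavor.

    variant: "plain" (mxnet), "mkl" (mxnet-mkl), "native" (mxnet-native,
    built without oneDNN), or a CUDA tag such as "cu102" (mxnet-cu102).
    version: None for the current stable (installed unpinned, as the page
    does for 1.7), or an exact release string for older versions.
    """
    if variant == "plain":
        package = "mxnet"
    elif variant in ("mkl", "native") or variant.startswith("cu"):
        package = "mxnet-" + variant
    else:
        raise ValueError("unknown MXNet variant: %s" % variant)
    # Older releases are pinned with an exact "==" specifier, matching
    # the hunks above; the current stable carries no pin.
    return package if version is None else "%s==%s" % (package, version)

print(mxnet_pip_spec(None, "native"))    # pip install mxnet-native
print(mxnet_pip_spec("1.6.0", "mkl"))    # mxnet-mkl==1.6.0
print(mxnet_pip_spec("1.6.0", "cu102"))  # mxnet-cu102==1.6.0
```

This mirrors, but does not replace, the per-version snippets on the page; the authoritative package names and pins are the ones shown in the diff.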