This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 783f999 Publish triggered by CI
783f999 is described below
commit 783f999ac1de391dfbf1ecef28af4e017bbd95c5
Author: mxnet-ci <mxnet-ci>
AuthorDate: Mon Aug 24 00:44:02 2020 +0000
Publish triggered by CI
---
.../docs/tutorials/multi_threaded_inference.html | 20 +--
api/dev-guide/profiling.html | 18 +--
api/python/docs/_modules/mxnet/dlpack.html | 4 +-
.../docs/_modules/mxnet/ndarray/ndarray.html | 10 +-
api/python/docs/_modules/mxnet/profiler.html | 10 +-
api/python/docs/_modules/mxnet/symbol/symbol.html | 8 +-
api/python/docs/api/legacy/ndarray/ndarray.html | 152 +++------------------
api/python/docs/api/legacy/ndarray/op/index.html | 152 +++------------------
api/python/docs/api/legacy/symbol/op/index.html | 152 +++------------------
api/python/docs/api/legacy/symbol/symbol.html | 152 +++------------------
api/python/docs/genindex.html | 26 +---
api/python/docs/objects.inv | Bin 93309 -> 93186 bytes
api/python/docs/searchindex.js | 2 +-
date.txt | 1 -
feed.xml | 2 +-
15 files changed, 120 insertions(+), 589 deletions(-)
diff --git a/api/cpp/docs/tutorials/multi_threaded_inference.html
b/api/cpp/docs/tutorials/multi_threaded_inference.html
index f63a4cf..f81ffac 100644
--- a/api/cpp/docs/tutorials/multi_threaded_inference.html
+++ b/api/cpp/docs/tutorials/multi_threaded_inference.html
@@ -476,12 +476,12 @@ for MXNet users to do multi-threaded inference.</p>
* \brief create cached operator, allows to choose thread_safe version
* of cachedop
*/
-MXNET_DLL int MXCreateCachedOpEX(SymbolHandle handle,
- int num_flags,
- const char** keys,
- const char** vals,
- CachedOpHandle *out,
- bool thread_safe DEFAULT(false));
+MXNET_DLL int MXCreateCachedOp(SymbolHandle handle,
+ int num_flags,
+ const char** keys,
+ const char** vals,
+ CachedOpHandle *out,
+ bool thread_safe DEFAULT(false));
</code></pre></div></div>
<h2
id="multithreaded-inference-in-mxnet-with-c-api-and-cpp-package">Multithreaded
inference in MXNet with C API and CPP Package</h2>
@@ -560,8 +560,8 @@ This example requires a build with CUDA and CUDNN.</p>
<p><a
href="multi_threaded_inference.cc#L207-233">https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L207-L233</a></p>
<p>The above code prepares <code
class="highlighter-rouge">flag_key_cstrs</code> and <code
class="highlighter-rouge">flag_val_cstrs</code> to be passed the Cached op.
-The C API call is made with <code
class="highlighter-rouge">MXCreateCachedOpEX</code>. This will lead to creation
of thread safe cached
-op since the <code class="highlighter-rouge">thread_safe</code> (which is the
last parameter to <code class="highlighter-rouge">MXCreateCachedOpEX</code>) is
set to
+The C API call is made with <code
class="highlighter-rouge">MXCreateCachedOp</code>. This will lead to creation
of thread safe cached
+op since the <code class="highlighter-rouge">thread_safe</code> (which is the
last parameter to <code class="highlighter-rouge">MXCreateCachedOp</code>) is
set to
true. When this is set to false, it will invoke CachedOp instead of
CachedOpThreadSafe.</p>
<h3 id="step-4-prepare-lambda-function-which-will-run-in-spawned-threads">Step
4: Prepare lambda function which will run in spawned threads</h3>
@@ -570,7 +570,7 @@ true. When this is set to false, it will invoke CachedOp
instead of CachedOpThre
<p>The above creates the lambda function taking the thread number as the
argument.
If <code class="highlighter-rouge">random_sleep</code> is set it will sleep
for a random number (secs) generated between 0 to 5 seconds.
-Following this, it invokes <code
class="highlighter-rouge">MXInvokeCachedOpEx</code>(from the hdl it determines
whether to invoke cached op threadsafe version or not).
+Following this, it invokes <code
class="highlighter-rouge">MXInvokeCachedOp</code>(from the hdl it determines
whether to invoke cached op threadsafe version or not).
When this is set to false, it will invoke CachedOp instead of
CachedOpThreadSafe.</p>
<h3
id="step-5-spawn-multiple-threads-and-wait-for-all-threads-to-complete">Step 5:
Spawn multiple threads and wait for all threads to complete</h3>
@@ -631,7 +631,7 @@ The other alternative is to wait in the thread on the
output ndarray and remove
<li>Bulking of ops is not supported.</li>
<li>This only supports inference use cases currently, training use cases are
not supported.</li>
<li>Graph rewrites with subgraph API currently not supported.</li>
- <li>There is currently no frontend API support to run multi threaded
inference. Users can use CreateCachedOpEX and InvokeCachedOp in combination with
+ <li>There is currently no frontend API support to run multi threaded
inference. Users can use CreateCachedOp and InvokeCachedOp in combination with
the CPP frontend to run multi-threaded inference as of today.</li>
<li>Multi threaded inference with threaded engine with Module/Symbolic API
and C Predict API are not currently supported.</li>
<li>Exception thrown with <code
class="highlighter-rouge">wait_to_read</code> in individual threads can cause
issues. Calling invoke from each thread and calling WaitAll after thread joins
should still work fine.</li>
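The hunks above track the removal of the Ex suffixes from the CachedOp C API (MXCreateCachedOpEX becomes MXCreateCachedOp, MXInvokeCachedOpEx becomes MXInvokeCachedOp). The spawn-and-join pattern the tutorial describes in Steps 4 and 5 can be sketched in plain Python; here `invoke_cached_op` is a placeholder standing in for the real MXInvokeCachedOp call (MXNet itself is not assumed), and `random_sleep` mirrors the tutorial's optional 0-5 second delay:

```python
import random
import threading
import time

def invoke_cached_op(thread_num):
    # placeholder for the real MXInvokeCachedOp C API call
    return thread_num * 2

def make_worker(results, random_sleep=False):
    # Step 4: a worker (lambda in the tutorial) taking the thread number
    def worker(thread_num):
        if random_sleep:
            time.sleep(random.uniform(0, 5))
        results[thread_num] = invoke_cached_op(thread_num)
    return worker

def run(num_threads, random_sleep=False):
    results = [None] * num_threads
    worker = make_worker(results, random_sleep)
    # Step 5: spawn the threads, then wait for all of them to complete
    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run(4))  # -> [0, 2, 4, 6]
```

Each thread writes into its own slot of `results`, so no lock is needed; the real tutorial instead waits on the output NDArrays (or calls WaitAll after the joins).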
diff --git a/api/dev-guide/profiling.html b/api/dev-guide/profiling.html
index 53e1637..4923e44 100644
--- a/api/dev-guide/profiling.html
+++ b/api/dev-guide/profiling.html
@@ -560,11 +560,11 @@ MXNET_C_API
=================
Name Total Count Time (ms) Min Time (ms)
Max Time (ms) Avg Time (ms)
---- ----------- --------- -------------
------------- -------------
-MXImperativeInvokeEx 2 0.3360 0.0990
0.2370 0.1680
+MXImperativeInvoke 2 0.3360 0.0990
0.2370 0.1680
MXNet C API Calls 17 0.2320 0.2160
0.2320 0.0080
MXNDArraySyncCopyFromCPU 1 0.1750 0.1750
0.1750 0.1750
-MXNDArrayCreateEx 1 0.1050 0.1050
0.1050 0.1050
-MXNDArrayGetShapeEx 11 0.0210 0.0000
0.0160 0.0019
+MXNDArrayCreate 1 0.1050 0.1050
0.1050 0.1050
+MXNDArrayGetShape 11 0.0210 0.0000
0.0160 0.0019
MXNDArrayWaitAll 1 0.0200 0.0200
0.0200 0.0200
MXNDArrayGetDType 1 0.0010 0.0010
0.0010 0.0010
MXNet C API Concurrency 34 0.0000 0.0000
0.0010 0.0000
@@ -594,11 +594,11 @@ The profiling data has captured info about interesting
functions that have execu
</thead>
<tbody>
<tr>
- <td><strong>MXImperativeInvokeEx</strong></td>
+ <td><strong>MXImperativeInvoke</strong></td>
<td>invokes an operator to perform the computation</td>
</tr>
<tr>
- <td><strong>MXNDArrayCreateEx</strong></td>
+ <td><strong>MXNDArrayCreate</strong></td>
<td>creates an ndarray</td>
</tr>
<tr>
@@ -697,7 +697,7 @@ The profiling data has captured info about interesting
functions that have execu
Here, the four red arrows show the important events in this sequence.</p>
<ol>
- <li>First, the <code class="highlighter-rouge">MXNDArrayCreateEx</code> is
called to physically allocate space to store the data and other necessary
attributes in the <code class="highlighter-rouge">ndarray</code> class.</li>
+ <li>First, the <code class="highlighter-rouge">MXNDArrayCreate</code> is
called to physically allocate space to store the data and other necessary
attributes in the <code class="highlighter-rouge">ndarray</code> class.</li>
<li>Then some support functions are called (<code
class="highlighter-rouge">MXNDArrayGetShape,</code> <code
class="highlighter-rouge">MXNDArrayGetDType</code>) while initialing the data
structure.</li>
<li>Finally the data is copied from the non-MXNet ndarray into the newly
prepared MXNet ndarray by the <code
class="highlighter-rouge">MXNDArraySyncCopyFromCPU</code> function.</li>
</ol>
@@ -708,9 +708,9 @@ Here, the four red arrows show the important events in this
sequence.</p>
Here you can see that the following sequence of events happen:</p>
<ol>
- <li><code class="highlighter-rouge">MXImperativeInvokeEx</code> is called
the first time to launch the diagonal operator from #3 (in our code
example).</li>
+ <li><code class="highlighter-rouge">MXImperativeInvoke</code> is called the
first time to launch the diagonal operator from #3 (in our code example).</li>
<li>Soon after that the actual <strong><code
class="highlighter-rouge">diag</code></strong> operator begins executing in
another thread.</li>
- <li>While that is happening, our main thread moves on and calls <code
class="highlighter-rouge">MXImperativeInvokeEx</code> again to launch the
<strong><code class="highlighter-rouge">sum</code></strong> operator. Just
like before, this returns without actually executing the operator and
continues.</li>
+ <li>While that is happening, our main thread moves on and calls <code
class="highlighter-rouge">MXImperativeInvoke</code> again to launch the
<strong><code class="highlighter-rouge">sum</code></strong> operator. Just
like before, this returns without actually executing the operator and
continues.</li>
<li>Lastly, the <code class="highlighter-rouge">MXNDArrayWaitAll</code> is
called as the main thread has progressed to #4 in our app. It will wait here
while all the computation finishes.</li>
</ol>
@@ -773,7 +773,7 @@ profiler.dump()
The first red box is the first run, and the 2nd smaller one is the 2nd run.
First off, we can see how much smaller the 2nd one is now without any of the
initialization routines. Here is a zoomed in view of just the 2nd run.</p>
<p><img src="/assets/img/dev_guide_profilling_7.png"
alt="dev_guide_profilling_7.png" />
-We still have the same sequence of events at the beginning to initialize the
MXNet ndarray (<code class="highlighter-rouge">MXNDArrayCreateEx</code>, <code
class="highlighter-rouge">MXNDArrayGetShape</code>, <code
class="highlighter-rouge">MXNDArrayGetDType</code>, <code
class="highlighter-rouge">MXNDArraySyncCopyFromCPU</code>). Then the
<strong><code class="highlighter-rouge">diag</code></strong> operator runs,
followed by the <strong><code class="highlighter-rouge">sum</code></strong>
[...]
+We still have the same sequence of events at the beginning to initialize the
MXNet ndarray (<code class="highlighter-rouge">MXNDArrayCreate</code>, <code
class="highlighter-rouge">MXNDArrayGetShape</code>, <code
class="highlighter-rouge">MXNDArrayGetDType</code>, <code
class="highlighter-rouge">MXNDArraySyncCopyFromCPU</code>). Then the
<strong><code class="highlighter-rouge">diag</code></strong> operator runs,
followed by the <strong><code class="highlighter-rouge">sum</code></strong> o
[...]
</div>
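The profiler table above (with the renamed rows MXImperativeInvoke, MXNDArrayCreate, MXNDArrayGetShape) aggregates per-call timings into Total Count / Time / Min / Max / Avg columns. A minimal pure-Python sketch of that kind of aggregation, using made-up samples rather than real profiler output (no MXNet assumed):

```python
from collections import defaultdict

def aggregate(samples):
    """samples: list of (name, duration_ms).

    Returns {name: (count, total, min, max, avg)}, i.e. the columns of the
    aggregate stats table shown in the profiling guide.
    """
    buckets = defaultdict(list)
    for name, ms in samples:
        buckets[name].append(ms)
    return {
        name: (len(ts), sum(ts), min(ts), max(ts), sum(ts) / len(ts))
        for name, ts in buckets.items()
    }

# two calls to the same API, like the MXImperativeInvoke row above
stats = aggregate([("MXImperativeInvoke", 0.099), ("MXImperativeInvoke", 0.237)])
count, total, lo, hi, avg = stats["MXImperativeInvoke"]
print(count, round(total, 4), lo, hi, round(avg, 4))  # -> 2 0.336 0.099 0.237 0.168
```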
diff --git a/api/python/docs/_modules/mxnet/dlpack.html
b/api/python/docs/_modules/mxnet/dlpack.html
index c17943b..6f72e69 100644
--- a/api/python/docs/_modules/mxnet/dlpack.html
+++ b/api/python/docs/_modules/mxnet/dlpack.html
@@ -1285,7 +1285,7 @@
<span class="k">assert</span> <span class="n">ctypes</span><span
class="o">.</span><span class="n">pythonapi</span><span class="o">.</span><span
class="n">PyCapsule_IsValid</span><span class="p">(</span><span
class="n">dlpack</span><span class="p">,</span> <span
class="n">_c_str_dltensor</span><span class="p">),</span> <span
class="ne">ValueError</span><span class="p">(</span>
<span class="s1">'Invalid DLPack Tensor. DLTensor capsules can
be consumed only once.'</span><span class="p">)</span>
<span class="n">dlpack_handle</span> <span class="o">=</span> <span
class="n">ctypes</span><span class="o">.</span><span
class="n">c_void_p</span><span class="p">(</span><span
class="n">ctypes</span><span class="o">.</span><span
class="n">pythonapi</span><span class="o">.</span><span
class="n">PyCapsule_GetPointer</span><span class="p">(</span><span
class="n">dlpack</span><span class="p">,</span> <span
class="n">_c_str_dltensor</span><span class="p">))</span>
- <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayFromDLPackEx</span><span class="p">(</span><span
class="n">dlpack_handle</span><span class="p">,</span> <span
class="kc">False</span><span class="p">,</span> <span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span
class="n">handle</span><span class="p">)))</span>
+ <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayFromDLPack</span><span class="p">(</span><span
class="n">dlpack_handle</span><span class="p">,</span> <span
class="kc">False</span><span class="p">,</span> <span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span
class="n">handle</span><span class="p">)))</span>
<span class="c1"># Rename PyCapsule (DLPack)</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">pythonapi</span><span class="o">.</span><span
class="n">PyCapsule_SetName</span><span class="p">(</span><span
class="n">dlpack</span><span class="p">,</span> <span
class="n">_c_str_used_dltensor</span><span class="p">)</span>
<span class="c1"># delete the deleter of the old dlpack</span>
@@ -1366,7 +1366,7 @@
<span class="n">ndarray</span><span class="o">.</span><span
class="n">flags</span><span class="p">[</span><span
class="s1">'WRITEABLE'</span><span class="p">]</span> <span
class="o">=</span> <span class="kc">False</span>
<span class="n">c_obj</span> <span class="o">=</span> <span
class="n">_make_dl_managed_tensor</span><span class="p">(</span><span
class="n">ndarray</span><span class="p">)</span>
<span class="n">handle</span> <span class="o">=</span> <span
class="n">NDArrayHandle</span><span class="p">()</span>
- <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayFromDLPackEx</span><span class="p">(</span><span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span class="n">c_obj</span><span
class="p">),</span> <span class="kc">True</span><span class="p">,</span> <span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p" [...]
+ <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayFromDLPack</span><span class="p">(</span><span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span class="n">c_obj</span><span
class="p">),</span> <span class="kc">True</span><span class="p">,</span> <span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">( [...]
<span class="k">return</span> <span class="n">array_cls</span><span
class="p">(</span><span class="n">handle</span><span class="o">=</span><span
class="n">handle</span><span class="p">)</span>
<span class="k">return</span> <span class="n">from_numpy</span>
</pre></div>
diff --git a/api/python/docs/_modules/mxnet/ndarray/ndarray.html
b/api/python/docs/_modules/mxnet/ndarray/ndarray.html
index ab8891a..79848be 100644
--- a/api/python/docs/_modules/mxnet/ndarray/ndarray.html
+++ b/api/python/docs/_modules/mxnet/ndarray/ndarray.html
@@ -1369,7 +1369,7 @@
<span class="n">dtype_type</span> <span class="o">=</span> <span
class="n">np</span><span class="o">.</span><span class="n">dtype</span><span
class="p">(</span><span class="n">dtype</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">dtype_type</span> <span class="o">=</span> <span
class="n">np</span><span class="o">.</span><span class="n">dtype</span><span
class="p">(</span><span class="n">dtype</span><span class="p">)</span><span
class="o">.</span><span class="n">type</span>
- <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayCreateEx64</span><span class="p">(</span>
+ <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayCreate64</span><span class="p">(</span>
<span class="n">c_array_buf</span><span class="p">(</span><span
class="n">mx_int64</span><span class="p">,</span> <span
class="n">native_array</span><span class="p">(</span><span
class="s1">'q'</span><span class="p">,</span> <span
class="n">shape</span><span class="p">)),</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">c_int</span><span class="p">(</span><span class="nb">len</span><span
class="p">(</span><span class="n">shape</span><span class="p">)),</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">c_int</span><span class="p">(</span><span class="n">ctx</span><span
class="o">.</span><span class="n">device_typeid</span><span class="p">),</span>
@@ -1391,7 +1391,7 @@
<span class="n">dtype_type</span> <span class="o">=</span> <span
class="n">np</span><span class="o">.</span><span class="n">dtype</span><span
class="p">(</span><span class="n">dtype</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">dtype_type</span> <span class="o">=</span> <span
class="n">np</span><span class="o">.</span><span class="n">dtype</span><span
class="p">(</span><span class="n">dtype</span><span class="p">)</span><span
class="o">.</span><span class="n">type</span>
- <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayCreateEx</span><span class="p">(</span>
+ <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayCreate</span><span class="p">(</span>
<span class="n">c_array_buf</span><span class="p">(</span><span
class="n">mx_uint</span><span class="p">,</span> <span
class="n">native_array</span><span class="p">(</span><span
class="s1">'I'</span><span class="p">,</span> <span
class="n">shape</span><span class="p">)),</span>
<span class="n">mx_uint</span><span class="p">(</span><span
class="nb">len</span><span class="p">(</span><span class="n">shape</span><span
class="p">)),</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">c_int</span><span class="p">(</span><span class="n">ctx</span><span
class="o">.</span><span class="n">device_typeid</span><span class="p">),</span>
@@ -1404,7 +1404,7 @@
<span class="k">def</span> <span class="nf">_new_from_shared_mem</span><span
class="p">(</span><span class="n">shared_pid</span><span class="p">,</span>
<span class="n">shared_id</span><span class="p">,</span> <span
class="n">shape</span><span class="p">,</span> <span
class="n">dtype</span><span class="p">):</span>
<span class="n">hdl</span> <span class="o">=</span> <span
class="n">NDArrayHandle</span><span class="p">()</span>
- <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayCreateFromSharedMemEx</span><span class="p">(</span>
+ <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayCreateFromSharedMem</span><span class="p">(</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">c_int</span><span class="p">(</span><span
class="n">shared_pid</span><span class="p">),</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">c_int</span><span class="p">(</span><span
class="n">shared_id</span><span class="p">),</span>
<span class="n">c_array</span><span class="p">(</span><span
class="n">mx_int</span><span class="p">,</span> <span
class="n">shape</span><span class="p">),</span>
@@ -3612,11 +3612,11 @@
<span class="n">ndim</span> <span class="o">=</span> <span
class="n">mx_int</span><span class="p">()</span>
<span class="k">if</span> <span class="n">_int64_enabled</span><span
class="p">():</span>
<span class="n">pdata</span> <span class="o">=</span> <span
class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">mx_int64</span><span class="p">)()</span>
- <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayGetShapeEx64</span><span class="p">(</span>
+ <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayGetShape64</span><span class="p">(</span>
<span class="bp">self</span><span class="o">.</span><span
class="n">handle</span><span class="p">,</span> <span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span class="n">ndim</span><span
class="p">),</span> <span class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span class="n">pdata</span><span
class="p">)))</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">pdata</span> <span class="o">=</span> <span
class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">mx_int</span><span class="p">)()</span>
- <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayGetShapeEx</span><span class="p">(</span>
+ <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXNDArrayGetShape</span><span class="p">(</span>
<span class="bp">self</span><span class="o">.</span><span
class="n">handle</span><span class="p">,</span> <span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span class="n">ndim</span><span
class="p">),</span> <span class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span class="n">pdata</span><span
class="p">)))</span>
<span class="k">if</span> <span class="n">ndim</span><span
class="o">.</span><span class="n">value</span> <span class="o">==</span> <span
class="o">-</span><span class="mi">1</span><span class="p">:</span>
<span class="k">return</span> <span class="kc">None</span>
diff --git a/api/python/docs/_modules/mxnet/profiler.html
b/api/python/docs/_modules/mxnet/profiler.html
index e18b6ec..203e9c39 100644
--- a/api/python/docs/_modules/mxnet/profiler.html
+++ b/api/python/docs/_modules/mxnet/profiler.html
@@ -1371,11 +1371,11 @@
<span class="s2">"Invalid value provided for ascending:
</span><span class="si">{0}</span><span class="s2">. Support: False,
True"</span><span class="o">.</span><span class="n">format</span><span
class="p">(</span><span class="n">ascending</span><span class="p">)</span>
<span class="k">assert</span> <span class="n">reset</span> <span
class="ow">in</span> <span class="n">reset_to_int</span><span
class="o">.</span><span class="n">keys</span><span class="p">(),</span>\
<span class="s2">"Invalid value provided for reset:
</span><span class="si">{0}</span><span class="s2">. Support: False,
True"</span><span class="o">.</span><span class="n">format</span><span
class="p">(</span><span class="n">reset</span><span class="p">)</span>
- <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXAggregateProfileStatsPrintEx</span><span class="p">(</span><span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span
class="n">debug_str</span><span class="p">),</span>
- <span
class="n">reset_to_int</span><span class="p">[</span><span
class="n">reset</span><span class="p">],</span>
- <span
class="n">format_to_int</span><span class="p">[</span><span
class="nb">format</span><span class="p">],</span>
- <span
class="n">sort_by_to_int</span><span class="p">[</span><span
class="n">sort_by</span><span class="p">],</span>
- <span
class="n">asc_to_int</span><span class="p">[</span><span
class="n">ascending</span><span class="p">]))</span>
+ <span class="n">check_call</span><span class="p">(</span><span
class="n">_LIB</span><span class="o">.</span><span
class="n">MXAggregateProfileStatsPrint</span><span class="p">(</span><span
class="n">ctypes</span><span class="o">.</span><span
class="n">byref</span><span class="p">(</span><span
class="n">debug_str</span><span class="p">),</span>
+ <span
class="n">reset_to_int</span><span class="p">[</span><span
class="n">reset</span><span class="p">],</span>
+ <span
class="n">format_to_int</span><span class="p">[</span><span
class="nb">format</span><span class="p">],</span>
+ <span
class="n">sort_by_to_int</span><span class="p">[</span><span
class="n">sort_by</span><span class="p">],</span>
+ <span
class="n">asc_to_int</span><span class="p">[</span><span
class="n">ascending</span><span class="p">]))</span>
<span class="k">return</span> <span class="n">py_str</span><span
class="p">(</span><span class="n">debug_str</span><span class="o">.</span><span
class="n">value</span><span class="p">)</span></div>
diff --git a/api/python/docs/_modules/mxnet/symbol/symbol.html
b/api/python/docs/_modules/mxnet/symbol/symbol.html
index ef7133f..71d8c64 100644
--- a/api/python/docs/_modules/mxnet/symbol/symbol.html
+++ b/api/python/docs/_modules/mxnet/symbol/symbol.html
@@ -2428,9 +2428,9 @@
<span class="n">out_shape_data</span> <span class="o">=</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">mx_int64</span><span class="p">))()</span>
<span class="n">aux_shape_data</span> <span class="o">=</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">mx_int64</span><span class="p">))()</span>
<span class="k">if</span> <span class="n">partial</span><span
class="p">:</span>
- <span class="n">infer_func</span> <span class="o">=</span>
<span class="n">_LIB</span><span class="o">.</span><span
class="n">MXSymbolInferShapePartialEx64</span>
+ <span class="n">infer_func</span> <span class="o">=</span>
<span class="n">_LIB</span><span class="o">.</span><span
class="n">MXSymbolInferShapePartial64</span>
<span class="k">else</span><span class="p">:</span>
- <span class="n">infer_func</span> <span class="o">=</span>
<span class="n">_LIB</span><span class="o">.</span><span
class="n">MXSymbolInferShapeEx64</span>
+ <span class="n">infer_func</span> <span class="o">=</span>
<span class="n">_LIB</span><span class="o">.</span><span
class="n">MXSymbolInferShape64</span>
<span class="n">check_call</span><span class="p">(</span><span
class="n">infer_func</span><span class="p">(</span>
<span class="bp">self</span><span class="o">.</span><span
class="n">handle</span><span class="p">,</span>
<span class="n">mx_uint</span><span class="p">(</span><span
class="nb">len</span><span class="p">(</span><span class="n">indptr</span><span
class="p">)</span> <span class="o">-</span> <span class="mi">1</span><span
class="p">),</span>
@@ -2457,9 +2457,9 @@
<span class="n">out_shape_data</span> <span class="o">=</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">mx_int</span><span class="p">))()</span>
<span class="n">aux_shape_data</span> <span class="o">=</span>
<span class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">ctypes</span><span class="o">.</span><span
class="n">POINTER</span><span class="p">(</span><span
class="n">mx_int</span><span class="p">))()</span>
<span class="k">if</span> <span class="n">partial</span><span
class="p">:</span>
- <span class="n">infer_func</span> <span class="o">=</span>
<span class="n">_LIB</span><span class="o">.</span><span
class="n">MXSymbolInferShapePartialEx</span>
+ <span class="n">infer_func</span> <span class="o">=</span>
<span class="n">_LIB</span><span class="o">.</span><span
class="n">MXSymbolInferShapePartial</span>
<span class="k">else</span><span class="p">:</span>
- <span class="n">infer_func</span> <span class="o">=</span>
<span class="n">_LIB</span><span class="o">.</span><span
class="n">MXSymbolInferShapeEx</span>
+ <span class="n">infer_func</span> <span class="o">=</span>
<span class="n">_LIB</span><span class="o">.</span><span
class="n">MXSymbolInferShape</span>
<span class="n">check_call</span><span class="p">(</span><span
class="n">infer_func</span><span class="p">(</span>
<span class="bp">self</span><span class="o">.</span><span
class="n">handle</span><span class="p">,</span>
<span class="n">mx_uint</span><span class="p">(</span><span
class="nb">len</span><span class="p">(</span><span class="n">indptr</span><span
class="p">)</span> <span class="o">-</span> <span class="mi">1</span><span
class="p">),</span>
diff --git a/api/python/docs/api/legacy/ndarray/ndarray.html
b/api/python/docs/api/legacy/ndarray/ndarray.html
index b46507f..444dc6c 100644
--- a/api/python/docs/api/legacy/ndarray/ndarray.html
+++ b/api/python/docs/api/legacy/ndarray/ndarray.html
@@ -1244,76 +1244,70 @@ Show Source
<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Convolution" title="mxnet.ndarray.Convolution"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Convolution</span></code></a>([data, weight, bias, kernel,
…])</p></td>
<td><p>Compute <em>N</em>-D convolution on <em>(N+2)</em>-D input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Convolution_v1" title="mxnet.ndarray.Convolution_v1"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Convolution_v1</span></code></a>([data, weight, bias, kernel,
…])</p></td>
-<td><p>This operator is DEPRECATED.</p></td>
-</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Correlation" title="mxnet.ndarray.Correlation"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Correlation</span></code></a>([data1, data2, kernel_size,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Correlation" title="mxnet.ndarray.Correlation"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Correlation</span></code></a>([data1, data2, kernel_size,
…])</p></td>
<td><p>Applies correlation to inputs.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Crop" title="mxnet.ndarray.Crop"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Crop</span></code></a>(*data, **kwargs)</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Crop" title="mxnet.ndarray.Crop"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Crop</span></code></a>(*data, **kwargs)</p></td>
<td><p><div class="admonition note">
<p class="admonition-title">Note</p>
<p><cite>Crop</cite> is deprecated. Use <cite>slice</cite> instead.</p>
</div>
</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Custom" title="mxnet.ndarray.Custom"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Custom</span></code></a>(*data, **kwargs)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Custom" title="mxnet.ndarray.Custom"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Custom</span></code></a>(*data, **kwargs)</p></td>
<td><p>Apply a custom operator implemented in a frontend language (like
Python).</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Deconvolution" title="mxnet.ndarray.Deconvolution"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Deconvolution</span></code></a>([data, weight, bias, kernel,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Deconvolution" title="mxnet.ndarray.Deconvolution"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Deconvolution</span></code></a>([data, weight, bias, kernel,
…])</p></td>
<td><p>Computes 1D or 2D transposed convolution (aka fractionally strided
convolution) of the input tensor.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Dropout" title="mxnet.ndarray.Dropout"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Dropout</span></code></a>([data, p, mode, axes, cudnn_off,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Dropout" title="mxnet.ndarray.Dropout"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Dropout</span></code></a>([data, p, mode, axes, cudnn_off,
…])</p></td>
<td><p>Applies dropout operation to input array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.ElementWiseSum" title="mxnet.ndarray.ElementWiseSum"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">ElementWiseSum</span></code></a>(*args, **kwargs)</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.ElementWiseSum" title="mxnet.ndarray.ElementWiseSum"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">ElementWiseSum</span></code></a>(*args, **kwargs)</p></td>
<td><p>Adds all input arguments element-wise.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Embedding" title="mxnet.ndarray.Embedding"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Embedding</span></code></a>([data, weight, input_dim, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Embedding" title="mxnet.ndarray.Embedding"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Embedding</span></code></a>([data, weight, input_dim, …])</p></td>
<td><p>Maps integer indices to vector representations (embeddings).</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Flatten" title="mxnet.ndarray.Flatten"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Flatten</span></code></a>([data, out, name])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Flatten" title="mxnet.ndarray.Flatten"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Flatten</span></code></a>([data, out, name])</p></td>
<td><p>Flattens the input array into a 2-D array by collapsing the higher
dimensions.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.FullyConnected" title="mxnet.ndarray.FullyConnected"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">FullyConnected</span></code></a>([data, weight, bias, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.FullyConnected" title="mxnet.ndarray.FullyConnected"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">FullyConnected</span></code></a>([data, weight, bias, …])</p></td>
<td><p>Applies a linear transformation: <span class="math notranslate
nohighlight">\(Y = XW^T + b\)</span>.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.GridGenerator" title="mxnet.ndarray.GridGenerator"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GridGenerator</span></code></a>([data, transform_type, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.GridGenerator" title="mxnet.ndarray.GridGenerator"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GridGenerator</span></code></a>([data, transform_type, …])</p></td>
<td><p>Generates 2D sampling grid for bilinear sampling.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.GroupNorm" title="mxnet.ndarray.GroupNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GroupNorm</span></code></a>([data, gamma, beta, num_groups,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.GroupNorm" title="mxnet.ndarray.GroupNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GroupNorm</span></code></a>([data, gamma, beta, num_groups,
…])</p></td>
<td><p>Group normalization.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.IdentityAttachKLSparseReg"
title="mxnet.ndarray.IdentityAttachKLSparseReg"><code class="xref py py-obj
docutils literal notranslate"><span
class="pre">IdentityAttachKLSparseReg</span></code></a>([data, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.IdentityAttachKLSparseReg"
title="mxnet.ndarray.IdentityAttachKLSparseReg"><code class="xref py py-obj
docutils literal notranslate"><span
class="pre">IdentityAttachKLSparseReg</span></code></a>([data, …])</p></td>
<td><p>Apply a sparse regularization to the output of a sigmoid activation
function.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.InstanceNorm" title="mxnet.ndarray.InstanceNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">InstanceNorm</span></code></a>([data, gamma, beta, eps, out,
name])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.InstanceNorm" title="mxnet.ndarray.InstanceNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">InstanceNorm</span></code></a>([data, gamma, beta, eps, out,
name])</p></td>
<td><p>Applies instance normalization to the n-dimensional input
array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.L2Normalization"
title="mxnet.ndarray.L2Normalization"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">L2Normalization</span></code></a>([data,
eps, mode, out, name])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.L2Normalization"
title="mxnet.ndarray.L2Normalization"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">L2Normalization</span></code></a>([data,
eps, mode, out, name])</p></td>
<td><p>Normalize the input array using the L2 norm.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.LRN" title="mxnet.ndarray.LRN"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">LRN</span></code></a>([data,
alpha, beta, knorm, nsize, out, name])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.LRN" title="mxnet.ndarray.LRN"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">LRN</span></code></a>([data,
alpha, beta, knorm, nsize, out, name])</p></td>
<td><p>Applies local response normalization to the input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.LayerNorm" title="mxnet.ndarray.LayerNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LayerNorm</span></code></a>([data, gamma, beta, axis, eps,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.LayerNorm" title="mxnet.ndarray.LayerNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LayerNorm</span></code></a>([data, gamma, beta, axis, eps,
…])</p></td>
<td><p>Layer normalization.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.LeakyReLU" title="mxnet.ndarray.LeakyReLU"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LeakyReLU</span></code></a>([data, gamma, act_type, slope,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.LeakyReLU" title="mxnet.ndarray.LeakyReLU"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LeakyReLU</span></code></a>([data, gamma, act_type, slope,
…])</p></td>
<td><p>Applies Leaky rectified linear unit activation element-wise to the
input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.MakeLoss" title="mxnet.ndarray.MakeLoss"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">MakeLoss</span></code></a>([data, grad_scale, valid_thresh,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.MakeLoss" title="mxnet.ndarray.MakeLoss"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">MakeLoss</span></code></a>([data, grad_scale, valid_thresh,
…])</p></td>
<td><p>Make your own loss function in network construction.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Pad" title="mxnet.ndarray.Pad"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">Pad</span></code></a>([data,
mode, pad_width, constant_value, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Pad" title="mxnet.ndarray.Pad"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">Pad</span></code></a>([data,
mode, pad_width, constant_value, …])</p></td>
<td><p>Pads an input array with a constant value or the edge values of the
array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.Pooling" title="mxnet.ndarray.Pooling"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Pooling</span></code></a>([data, kernel, pool_type, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Pooling" title="mxnet.ndarray.Pooling"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Pooling</span></code></a>([data, kernel, pool_type, …])</p></td>
<td><p>Performs pooling on the input.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.Pooling_v1" title="mxnet.ndarray.Pooling_v1"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Pooling_v1</span></code></a>([data, kernel, pool_type, …])</p></td>
-<td><p>This operator is DEPRECATED.</p></td>
-</tr>
<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.RNN" title="mxnet.ndarray.RNN"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">RNN</span></code></a>([data,
parameters, state, state_cell, …])</p></td>
<td><p>Applies recurrent layers to input data.</p></td>
</tr>
@@ -2664,48 +2658,6 @@ default layout: NCW for 1d, NCHW for 2d and NCDHW for
3d. NHWC and NDHWC are only
</dd></dl>
<dl class="function">
-<dt id="mxnet.ndarray.Convolution_v1">
-<code class="sig-prename descclassname">mxnet.ndarray.</code><code
class="sig-name descname">Convolution_v1</code><span
class="sig-paren">(</span><em class="sig-param">data=None</em>, <em
class="sig-param">weight=None</em>, <em class="sig-param">bias=None</em>, <em
class="sig-param">kernel=_Null</em>, <em class="sig-param">stride=_Null</em>,
<em class="sig-param">dilate=_Null</em>, <em class="sig-param">pad=_Null</em>,
<em class="sig-param">num_filter=_Null</em>, <em class="sig-param">nu [...]
-<dd><p>This operator is DEPRECATED. Apply convolution to input then add a
bias.</p>
-<dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>data</strong> (<a class="reference internal"
href="#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Input data to the
ConvolutionV1Op.</p></li>
-<li><p><strong>weight</strong> (<a class="reference internal"
href="#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Weight matrix.</p></li>
-<li><p><strong>bias</strong> (<a class="reference internal"
href="#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Bias parameter.</p></li>
-<li><p><strong>kernel</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>, </em><em>required</em>)
– convolution kernel size: (h, w) or (d, h, w)</p></li>
-<li><p><strong>stride</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
convolution stride: (h, w) or (d, h, w)</p></li>
-<li><p><strong>dilate</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
convolution dilate: (h, w) or (d, h, w)</p></li>
-<li><p><strong>pad</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) – pad
for convolution: (h, w) or (d, h, w)</p></li>
-<li><p><strong>num_filter</strong> (<em>int</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>required</em>) –
convolution filter(channel) number</p></li>
-<li><p><strong>num_group</strong> (<em>int</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>optional</em><em>,
</em><em>default=1</em>) – Number of group partitions. Equivalent to slicing
input into num_group
-partitions, apply convolution on each, then concatenate the results</p></li>
-<li><p><strong>workspace</strong> (<em>long</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>optional</em><em>,
</em><em>default=1024</em>) – Maximum temporary workspace allowed for
convolution (MB).This parameter determines the effective batch size of the
convolution kernel, which may be smaller than the given batch size. Also, the
workspace will be automatically enlarged to make sure that we can run the
kernel with batch_size=1</p></li>
-<li><p><strong>no_bias</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Whether to disable bias
parameter.</p></li>
-<li><p><strong>cudnn_tune</strong> (<em>{None</em><em>,
</em><em>'fastest'</em><em>, </em><em>'limited_workspace'</em><em>,
</em><em>'off'}</em><em>,</em><em>optional</em><em>,
</em><em>default='None'</em>) – Whether to pick convolution algo by running
performance test.
-Leads to higher startup time but may give faster speed. Options are:
-‘off’: no tuning
-‘limited_workspace’: run test and pick the fastest algorithm that doesn’t
exceed workspace limit.
-‘fastest’: pick the fastest algorithm and ignore workspace limit.
-If set to None (default), behavior is determined by environment
-variable MXNET_CUDNN_AUTOTUNE_DEFAULT: 0 for off,
-1 for limited workspace (default), 2 for fastest.</p></li>
-<li><p><strong>cudnn_off</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Turn off cudnn for this
layer.</p></li>
-<li><p><strong>layout</strong> (<em>{None</em><em>, </em><em>'NCDHW'</em><em>,
</em><em>'NCHW'</em><em>, </em><em>'NDHWC'</em><em>,
</em><em>'NHWC'}</em><em>,</em><em>optional</em><em>,
</em><em>default='None'</em>) – Set layout for input, output and weight. Empty
for
-default layout: NCHW for 2d and NCDHW for 3d.</p></li>
-<li><p><strong>out</strong> (<a class="reference internal"
href="#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a><em>, </em><em>optional</em>)
– The output NDArray to hold the result.</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p><strong>out</strong> – The output of this
function.</p>
-</dd>
-<dt class="field-odd">Return type</dt>
-<dd class="field-odd"><p><a class="reference internal"
href="#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray">NDArray</a> or list
of NDArrays</p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="function">
<dt id="mxnet.ndarray.Correlation">
<code class="sig-prename descclassname">mxnet.ndarray.</code><code
class="sig-name descname">Correlation</code><span class="sig-paren">(</span><em
class="sig-param">data1=None</em>, <em class="sig-param">data2=None</em>, <em
class="sig-param">kernel_size=_Null</em>, <em
class="sig-param">max_displacement=_Null</em>, <em
class="sig-param">stride1=_Null</em>, <em class="sig-param">stride2=_Null</em>,
<em class="sig-param">pad_size=_Null</em>, <em
class="sig-param">is_multiply=_Null</em>, < [...]
<dd><p>Applies correlation to inputs.</p>
@@ -3631,70 +3583,6 @@ default layout: NCW for 1d, NCHW for 2d and NCDHW for
3d.</p></li>
</dd></dl>
<dl class="function">
-<dt id="mxnet.ndarray.Pooling_v1">
-<code class="sig-prename descclassname">mxnet.ndarray.</code><code
class="sig-name descname">Pooling_v1</code><span class="sig-paren">(</span><em
class="sig-param">data=None</em>, <em class="sig-param">kernel=_Null</em>, <em
class="sig-param">pool_type=_Null</em>, <em
class="sig-param">global_pool=_Null</em>, <em
class="sig-param">pooling_convention=_Null</em>, <em
class="sig-param">stride=_Null</em>, <em class="sig-param">pad=_Null</em>, <em
class="sig-param">out=None</em>, <em class="s [...]
-<dd><p>This operator is DEPRECATED.
-Perform pooling on the input.</p>
-<p>The shapes for 2-D pooling is</p>
-<ul>
-<li><p><strong>data</strong>: <em>(batch_size, channel, height,
width)</em></p></li>
-<li><p><strong>out</strong>: <em>(batch_size, num_filter, out_height,
out_width)</em>, with:</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">out_height</span> <span
class="o">=</span> <span class="n">f</span><span class="p">(</span><span
class="n">height</span><span class="p">,</span> <span
class="n">kernel</span><span class="p">[</span><span class="mi">0</span><span
class="p">],</span> <span class="n">pad</span><span class="p">[</span><span
class="mi">0</span><span class="p">],</span> <span class="n">stride</span><span
class=" [...]
-<span class="n">out_width</span> <span class="o">=</span> <span
class="n">f</span><span class="p">(</span><span class="n">width</span><span
class="p">,</span> <span class="n">kernel</span><span class="p">[</span><span
class="mi">1</span><span class="p">],</span> <span class="n">pad</span><span
class="p">[</span><span class="mi">1</span><span class="p">],</span> <span
class="n">stride</span><span class="p">[</span><span class="mi">1</span><span
class="p">])</span>
-</pre></div>
-</div>
-</li>
-</ul>
-<p>The definition of <em>f</em> depends on <code class="docutils literal
notranslate"><span class="pre">pooling_convention</span></code>, which has two
options:</p>
-<ul>
-<li><p><strong>valid</strong> (default):</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">f</span><span
class="p">(</span><span class="n">x</span><span class="p">,</span> <span
class="n">k</span><span class="p">,</span> <span class="n">p</span><span
class="p">,</span> <span class="n">s</span><span class="p">)</span> <span
class="o">=</span> <span class="n">floor</span><span class="p">((</span><span
class="n">x</span><span class="o">+</span><span class="mi">2</span><span
class=" [...]
-</pre></div>
-</div>
-</li>
-<li><p><strong>full</strong>, which is compatible with Caffe:</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">f</span><span
class="p">(</span><span class="n">x</span><span class="p">,</span> <span
class="n">k</span><span class="p">,</span> <span class="n">p</span><span
class="p">,</span> <span class="n">s</span><span class="p">)</span> <span
class="o">=</span> <span class="n">ceil</span><span class="p">((</span><span
class="n">x</span><span class="o">+</span><span class="mi">2</span><span
class="o [...]
-</pre></div>
-</div>
-</li>
-</ul>
-<p>But <code class="docutils literal notranslate"><span
class="pre">global_pool</span></code> is set to be true, then do a global
pooling, namely reset
-<code class="docutils literal notranslate"><span
class="pre">kernel=(height,</span> <span class="pre">width)</span></code>.</p>
-<p>Three pooling options are supported by <code class="docutils literal
notranslate"><span class="pre">pool_type</span></code>:</p>
-<ul class="simple">
-<li><p><strong>avg</strong>: average pooling</p></li>
-<li><p><strong>max</strong>: max pooling</p></li>
-<li><p><strong>sum</strong>: sum pooling</p></li>
-</ul>
-<p>1-D pooling is special case of 2-D pooling with <em>weight=1</em> and
-<em>kernel[1]=1</em>.</p>
-<p>For 3-D pooling, an additional <em>depth</em> dimension is added before
-<em>height</em>. Namely the input data will have shape <em>(batch_size,
channel, depth,
-height, width)</em>.</p>
-<p>Defined in /work/mxnet/src/operator/pooling_v1.cc:L104</p>
-<dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>data</strong> (<a class="reference internal"
href="#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Input data to the pooling
operator.</p></li>
-<li><p><strong>kernel</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
pooling kernel size: (y, x) or (d, y, x)</p></li>
-<li><p><strong>pool_type</strong> (<em>{'avg'</em><em>,
</em><em>'max'</em><em>, </em><em>'sum'}</em><em>,</em><em>optional</em><em>,
</em><em>default='max'</em>) – Pooling type to be applied.</p></li>
-<li><p><strong>global_pool</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Ignore kernel size, do
global pooling based on current input feature map.</p></li>
-<li><p><strong>pooling_convention</strong> (<em>{'full'</em><em>,
</em><em>'valid'}</em><em>,</em><em>optional</em><em>,
</em><em>default='valid'</em>) – Pooling convention to be applied.</p></li>
-<li><p><strong>stride</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
stride: for pooling (y, x) or (d, y, x)</p></li>
-<li><p><strong>pad</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) – pad
for pooling: (y, x) or (d, y, x)</p></li>
-<li><p><strong>out</strong> (<a class="reference internal"
href="#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a><em>, </em><em>optional</em>)
– The output NDArray to hold the result.</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p><strong>out</strong> – The output of this
function.</p>
-</dd>
-<dt class="field-odd">Return type</dt>
-<dd class="field-odd"><p><a class="reference internal"
href="#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray">NDArray</a> or list
of NDArrays</p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="function">
<dt id="mxnet.ndarray.RNN">
<code class="sig-prename descclassname">mxnet.ndarray.</code><code
class="sig-name descname">RNN</code><span class="sig-paren">(</span><em
class="sig-param">data=None</em>, <em class="sig-param">parameters=None</em>,
<em class="sig-param">state=None</em>, <em
class="sig-param">state_cell=None</em>, <em
class="sig-param">sequence_length=None</em>, <em
class="sig-param">state_size=_Null</em>, <em
class="sig-param">num_layers=_Null</em>, <em
class="sig-param">bidirectional=_Null</em>, <em c [...]
<dd><p>Applies recurrent layers to input data. Currently, vanilla RNN, LSTM
and GRU are
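The removed `Pooling_v1` entry above gives the output-size formulas for the two `pooling_convention` options: `valid` (the default) floors the quotient, while `full` (Caffe-compatible) ceils it. As a minimal sketch of those formulas only (the helper name `pool_out_dim` is illustrative, not part of the MXNet API):

```python
import math

def pool_out_dim(x, k, p, s, convention="valid"):
    """Output size along one pooled dimension.

    Implements f(x, k, p, s) from the Pooling_v1 docs:
      'valid': floor((x + 2*p - k) / s) + 1
      'full' : ceil((x + 2*p - k) / s) + 1   (Caffe-compatible)
    """
    if convention == "valid":
        return math.floor((x + 2 * p - k) / s) + 1
    if convention == "full":
        return math.ceil((x + 2 * p - k) / s) + 1
    raise ValueError("convention must be 'valid' or 'full'")

# 6-wide input, 3-wide kernel, no padding, stride 2:
# the two conventions disagree because (6 - 3) / 2 is fractional.
print(pool_out_dim(6, 3, 0, 2, "valid"))  # 2
print(pool_out_dim(6, 3, 0, 2, "full"))   # 3
```

With `global_pool=True` the kernel size is ignored and the whole spatial extent is pooled, so both conventions reduce each spatial dimension to 1.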
diff --git a/api/python/docs/api/legacy/ndarray/op/index.html
b/api/python/docs/api/legacy/ndarray/op/index.html
index 8aff582..1b78d8f 100644
--- a/api/python/docs/api/legacy/ndarray/op/index.html
+++ b/api/python/docs/api/legacy/ndarray/op/index.html
@@ -1241,76 +1241,70 @@ Show Source
<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Convolution" title="mxnet.ndarray.op.Convolution"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Convolution</span></code></a>([data, weight, bias, kernel,
…])</p></td>
<td><p>Compute <em>N</em>-D convolution on <em>(N+2)</em>-D input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Convolution_v1"
title="mxnet.ndarray.op.Convolution_v1"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">Convolution_v1</span></code></a>([data,
weight, bias, kernel, …])</p></td>
-<td><p>This operator is DEPRECATED.</p></td>
-</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Correlation" title="mxnet.ndarray.op.Correlation"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Correlation</span></code></a>([data1, data2, kernel_size,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Correlation" title="mxnet.ndarray.op.Correlation"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Correlation</span></code></a>([data1, data2, kernel_size,
…])</p></td>
<td><p>Applies correlation to inputs.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Crop" title="mxnet.ndarray.op.Crop"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Crop</span></code></a>(*data, **kwargs)</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Crop" title="mxnet.ndarray.op.Crop"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Crop</span></code></a>(*data, **kwargs)</p></td>
<td><p><div class="admonition note">
<p class="admonition-title">Note</p>
<p><cite>Crop</cite> is deprecated. Use <cite>slice</cite> instead.</p>
</div>
</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Custom" title="mxnet.ndarray.op.Custom"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Custom</span></code></a>(*data, **kwargs)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Custom" title="mxnet.ndarray.op.Custom"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Custom</span></code></a>(*data, **kwargs)</p></td>
<td><p>Apply a custom operator implemented in a frontend language (like
Python).</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Deconvolution"
title="mxnet.ndarray.op.Deconvolution"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">Deconvolution</span></code></a>([data,
weight, bias, kernel, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Deconvolution"
title="mxnet.ndarray.op.Deconvolution"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">Deconvolution</span></code></a>([data,
weight, bias, kernel, …])</p></td>
<td><p>Computes 1D or 2D transposed convolution (aka fractionally strided
convolution) of the input tensor.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Dropout" title="mxnet.ndarray.op.Dropout"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Dropout</span></code></a>([data, p, mode, axes, cudnn_off,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Dropout" title="mxnet.ndarray.op.Dropout"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Dropout</span></code></a>([data, p, mode, axes, cudnn_off,
…])</p></td>
<td><p>Applies dropout operation to input array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.ElementWiseSum"
title="mxnet.ndarray.op.ElementWiseSum"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">ElementWiseSum</span></code></a>(*args,
**kwargs)</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.ElementWiseSum"
title="mxnet.ndarray.op.ElementWiseSum"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">ElementWiseSum</span></code></a>(*args,
**kwargs)</p></td>
<td><p>Adds all input arguments element-wise.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Embedding" title="mxnet.ndarray.op.Embedding"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Embedding</span></code></a>([data, weight, input_dim, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Embedding" title="mxnet.ndarray.op.Embedding"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Embedding</span></code></a>([data, weight, input_dim, …])</p></td>
<td><p>Maps integer indices to vector representations (embeddings).</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Flatten" title="mxnet.ndarray.op.Flatten"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Flatten</span></code></a>([data, out, name])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Flatten" title="mxnet.ndarray.op.Flatten"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Flatten</span></code></a>([data, out, name])</p></td>
<td><p>Flattens the input array into a 2-D array by collapsing the higher
dimensions.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.FullyConnected"
title="mxnet.ndarray.op.FullyConnected"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">FullyConnected</span></code></a>([data,
weight, bias, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.FullyConnected"
title="mxnet.ndarray.op.FullyConnected"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">FullyConnected</span></code></a>([data,
weight, bias, …])</p></td>
<td><p>Applies a linear transformation: <span class="math notranslate
nohighlight">\(Y = XW^T + b\)</span>.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.GridGenerator"
title="mxnet.ndarray.op.GridGenerator"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">GridGenerator</span></code></a>([data,
transform_type, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.GridGenerator"
title="mxnet.ndarray.op.GridGenerator"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">GridGenerator</span></code></a>([data,
transform_type, …])</p></td>
<td><p>Generates 2D sampling grid for bilinear sampling.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.GroupNorm" title="mxnet.ndarray.op.GroupNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GroupNorm</span></code></a>([data, gamma, beta, num_groups,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.GroupNorm" title="mxnet.ndarray.op.GroupNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GroupNorm</span></code></a>([data, gamma, beta, num_groups,
…])</p></td>
<td><p>Group normalization.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.IdentityAttachKLSparseReg"
title="mxnet.ndarray.op.IdentityAttachKLSparseReg"><code class="xref py py-obj
docutils literal notranslate"><span
class="pre">IdentityAttachKLSparseReg</span></code></a>([data, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.IdentityAttachKLSparseReg"
title="mxnet.ndarray.op.IdentityAttachKLSparseReg"><code class="xref py py-obj
docutils literal notranslate"><span
class="pre">IdentityAttachKLSparseReg</span></code></a>([data, …])</p></td>
<td><p>Apply a sparse regularization to the output of a sigmoid activation
function.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.InstanceNorm"
title="mxnet.ndarray.op.InstanceNorm"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">InstanceNorm</span></code></a>([data,
gamma, beta, eps, out, name])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.InstanceNorm"
title="mxnet.ndarray.op.InstanceNorm"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">InstanceNorm</span></code></a>([data,
gamma, beta, eps, out, name])</p></td>
<td><p>Applies instance normalization to the n-dimensional input
array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.L2Normalization"
title="mxnet.ndarray.op.L2Normalization"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">L2Normalization</span></code></a>([data,
eps, mode, out, name])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.L2Normalization"
title="mxnet.ndarray.op.L2Normalization"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">L2Normalization</span></code></a>([data,
eps, mode, out, name])</p></td>
<td><p>Normalize the input array using the L2 norm.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.LRN" title="mxnet.ndarray.op.LRN"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">LRN</span></code></a>([data, alpha, beta, knorm, nsize, out,
name])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.LRN" title="mxnet.ndarray.op.LRN"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">LRN</span></code></a>([data, alpha, beta, knorm, nsize, out,
name])</p></td>
<td><p>Applies local response normalization to the input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.LayerNorm" title="mxnet.ndarray.op.LayerNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LayerNorm</span></code></a>([data, gamma, beta, axis, eps,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.LayerNorm" title="mxnet.ndarray.op.LayerNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LayerNorm</span></code></a>([data, gamma, beta, axis, eps,
…])</p></td>
<td><p>Layer normalization.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.LeakyReLU" title="mxnet.ndarray.op.LeakyReLU"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LeakyReLU</span></code></a>([data, gamma, act_type, slope,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.LeakyReLU" title="mxnet.ndarray.op.LeakyReLU"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LeakyReLU</span></code></a>([data, gamma, act_type, slope,
…])</p></td>
<td><p>Applies Leaky rectified linear unit activation element-wise to the
input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.MakeLoss" title="mxnet.ndarray.op.MakeLoss"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">MakeLoss</span></code></a>([data, grad_scale, valid_thresh,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.MakeLoss" title="mxnet.ndarray.op.MakeLoss"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">MakeLoss</span></code></a>([data, grad_scale, valid_thresh,
…])</p></td>
<td><p>Make your own loss function in network construction.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Pad" title="mxnet.ndarray.op.Pad"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Pad</span></code></a>([data, mode, pad_width, constant_value,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Pad" title="mxnet.ndarray.op.Pad"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Pad</span></code></a>([data, mode, pad_width, constant_value,
…])</p></td>
<td><p>Pads an input array with a constant or edge values of the
array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Pooling" title="mxnet.ndarray.op.Pooling"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Pooling</span></code></a>([data, kernel, pool_type, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Pooling" title="mxnet.ndarray.op.Pooling"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Pooling</span></code></a>([data, kernel, pool_type, …])</p></td>
<td><p>Performs pooling on the input.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.Pooling_v1" title="mxnet.ndarray.op.Pooling_v1"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Pooling_v1</span></code></a>([data, kernel, pool_type, …])</p></td>
-<td><p>This operator is DEPRECATED.</p></td>
-</tr>
<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.ndarray.op.RNN" title="mxnet.ndarray.op.RNN"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">RNN</span></code></a>([data, parameters, state, state_cell,
…])</p></td>
<td><p>Applies recurrent layers to input data.</p></td>
</tr>
@@ -2529,48 +2523,6 @@ default layout: NCW for 1d, NCHW for 2d and NCDHW for
3d.NHWC and NDHWC are only
</dd></dl>
<dl class="function">
-<dt id="mxnet.ndarray.op.Convolution_v1">
-<code class="sig-prename descclassname">mxnet.ndarray.op.</code><code
class="sig-name descname">Convolution_v1</code><span
class="sig-paren">(</span><em class="sig-param">data=None</em>, <em
class="sig-param">weight=None</em>, <em class="sig-param">bias=None</em>, <em
class="sig-param">kernel=_Null</em>, <em class="sig-param">stride=_Null</em>,
<em class="sig-param">dilate=_Null</em>, <em class="sig-param">pad=_Null</em>,
<em class="sig-param">num_filter=_Null</em>, <em class="sig-param" [...]
-<dd><p>This operator is DEPRECATED. Apply convolution to input then add a
bias.</p>
-<dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>data</strong> (<a class="reference internal"
href="../ndarray.html#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Input data to the
ConvolutionV1Op.</p></li>
-<li><p><strong>weight</strong> (<a class="reference internal"
href="../ndarray.html#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Weight matrix.</p></li>
-<li><p><strong>bias</strong> (<a class="reference internal"
href="../ndarray.html#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Bias parameter.</p></li>
-<li><p><strong>kernel</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>, </em><em>required</em>)
– convolution kernel size: (h, w) or (d, h, w)</p></li>
-<li><p><strong>stride</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
convolution stride: (h, w) or (d, h, w)</p></li>
-<li><p><strong>dilate</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
convolution dilate: (h, w) or (d, h, w)</p></li>
-<li><p><strong>pad</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) – pad
for convolution: (h, w) or (d, h, w)</p></li>
-<li><p><strong>num_filter</strong> (<em>int</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>required</em>) –
convolution filter(channel) number</p></li>
-<li><p><strong>num_group</strong> (<em>int</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>optional</em><em>,
</em><em>default=1</em>) – Number of group partitions. Equivalent to slicing
input into num_group
-partitions, apply convolution on each, then concatenate the results</p></li>
-<li><p><strong>workspace</strong> (<em>long</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>optional</em><em>,
</em><em>default=1024</em>) – Maximum temporary workspace allowed for
convolution (MB).This parameter determines the effective batch size of the
convolution kernel, which may be smaller than the given batch size. Also, the
workspace will be automatically enlarged to make sure that we can run the
kernel with batch_size=1</p></li>
-<li><p><strong>no_bias</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Whether to disable bias
parameter.</p></li>
-<li><p><strong>cudnn_tune</strong> (<em>{None</em><em>,
</em><em>'fastest'</em><em>, </em><em>'limited_workspace'</em><em>,
</em><em>'off'}</em><em>,</em><em>optional</em><em>,
</em><em>default='None'</em>) – Whether to pick convolution algo by running
performance test.
-Leads to higher startup time but may give faster speed. Options are:
-‘off’: no tuning
-‘limited_workspace’: run test and pick the fastest algorithm that doesn’t
exceed workspace limit.
-‘fastest’: pick the fastest algorithm and ignore workspace limit.
-If set to None (default), behavior is determined by environment
-variable MXNET_CUDNN_AUTOTUNE_DEFAULT: 0 for off,
-1 for limited workspace (default), 2 for fastest.</p></li>
-<li><p><strong>cudnn_off</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Turn off cudnn for this
layer.</p></li>
-<li><p><strong>layout</strong> (<em>{None</em><em>, </em><em>'NCDHW'</em><em>,
</em><em>'NCHW'</em><em>, </em><em>'NDHWC'</em><em>,
</em><em>'NHWC'}</em><em>,</em><em>optional</em><em>,
</em><em>default='None'</em>) – Set layout for input, output and weight. Empty
for
-default layout: NCHW for 2d and NCDHW for 3d.</p></li>
-<li><p><strong>out</strong> (<a class="reference internal"
href="../ndarray.html#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a><em>, </em><em>optional</em>)
– The output NDArray to hold the result.</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p><strong>out</strong> – The output of this
function.</p>
-</dd>
-<dt class="field-odd">Return type</dt>
-<dd class="field-odd"><p><a class="reference internal"
href="../ndarray.html#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray">NDArray</a> or list of NDArrays</p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="function">
<dt id="mxnet.ndarray.op.Correlation">
<code class="sig-prename descclassname">mxnet.ndarray.op.</code><code
class="sig-name descname">Correlation</code><span class="sig-paren">(</span><em
class="sig-param">data1=None</em>, <em class="sig-param">data2=None</em>, <em
class="sig-param">kernel_size=_Null</em>, <em
class="sig-param">max_displacement=_Null</em>, <em
class="sig-param">stride1=_Null</em>, <em class="sig-param">stride2=_Null</em>,
<em class="sig-param">pad_size=_Null</em>, <em
class="sig-param">is_multiply=_Null</em> [...]
<dd><p>Applies correlation to inputs.</p>
@@ -3496,70 +3448,6 @@ default layout: NCW for 1d, NCHW for 2d and NCDHW for
3d.</p></li>
</dd></dl>
<dl class="function">
-<dt id="mxnet.ndarray.op.Pooling_v1">
-<code class="sig-prename descclassname">mxnet.ndarray.op.</code><code
class="sig-name descname">Pooling_v1</code><span class="sig-paren">(</span><em
class="sig-param">data=None</em>, <em class="sig-param">kernel=_Null</em>, <em
class="sig-param">pool_type=_Null</em>, <em
class="sig-param">global_pool=_Null</em>, <em
class="sig-param">pooling_convention=_Null</em>, <em
class="sig-param">stride=_Null</em>, <em class="sig-param">pad=_Null</em>, <em
class="sig-param">out=None</em>, <em class [...]
-<dd><p>This operator is DEPRECATED.
-Perform pooling on the input.</p>
-<p>The shapes for 2-D pooling is</p>
-<ul>
-<li><p><strong>data</strong>: <em>(batch_size, channel, height,
width)</em></p></li>
-<li><p><strong>out</strong>: <em>(batch_size, num_filter, out_height,
out_width)</em>, with:</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">out_height</span> <span
class="o">=</span> <span class="n">f</span><span class="p">(</span><span
class="n">height</span><span class="p">,</span> <span
class="n">kernel</span><span class="p">[</span><span class="mi">0</span><span
class="p">],</span> <span class="n">pad</span><span class="p">[</span><span
class="mi">0</span><span class="p">],</span> <span class="n">stride</span><span
class=" [...]
-<span class="n">out_width</span> <span class="o">=</span> <span
class="n">f</span><span class="p">(</span><span class="n">width</span><span
class="p">,</span> <span class="n">kernel</span><span class="p">[</span><span
class="mi">1</span><span class="p">],</span> <span class="n">pad</span><span
class="p">[</span><span class="mi">1</span><span class="p">],</span> <span
class="n">stride</span><span class="p">[</span><span class="mi">1</span><span
class="p">])</span>
-</pre></div>
-</div>
-</li>
-</ul>
-<p>The definition of <em>f</em> depends on <code class="docutils literal
notranslate"><span class="pre">pooling_convention</span></code>, which has two
options:</p>
-<ul>
-<li><p><strong>valid</strong> (default):</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">f</span><span
class="p">(</span><span class="n">x</span><span class="p">,</span> <span
class="n">k</span><span class="p">,</span> <span class="n">p</span><span
class="p">,</span> <span class="n">s</span><span class="p">)</span> <span
class="o">=</span> <span class="n">floor</span><span class="p">((</span><span
class="n">x</span><span class="o">+</span><span class="mi">2</span><span
class=" [...]
-</pre></div>
-</div>
-</li>
-<li><p><strong>full</strong>, which is compatible with Caffe:</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">f</span><span
class="p">(</span><span class="n">x</span><span class="p">,</span> <span
class="n">k</span><span class="p">,</span> <span class="n">p</span><span
class="p">,</span> <span class="n">s</span><span class="p">)</span> <span
class="o">=</span> <span class="n">ceil</span><span class="p">((</span><span
class="n">x</span><span class="o">+</span><span class="mi">2</span><span
class="o [...]
-</pre></div>
-</div>
-</li>
-</ul>
-<p>But <code class="docutils literal notranslate"><span
class="pre">global_pool</span></code> is set to be true, then do a global
pooling, namely reset
-<code class="docutils literal notranslate"><span
class="pre">kernel=(height,</span> <span class="pre">width)</span></code>.</p>
-<p>Three pooling options are supported by <code class="docutils literal
notranslate"><span class="pre">pool_type</span></code>:</p>
-<ul class="simple">
-<li><p><strong>avg</strong>: average pooling</p></li>
-<li><p><strong>max</strong>: max pooling</p></li>
-<li><p><strong>sum</strong>: sum pooling</p></li>
-</ul>
-<p>1-D pooling is special case of 2-D pooling with <em>weight=1</em> and
-<em>kernel[1]=1</em>.</p>
-<p>For 3-D pooling, an additional <em>depth</em> dimension is added before
-<em>height</em>. Namely the input data will have shape <em>(batch_size,
channel, depth,
-height, width)</em>.</p>
-<p>Defined in /work/mxnet/src/operator/pooling_v1.cc:L104</p>
-<dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>data</strong> (<a class="reference internal"
href="../ndarray.html#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Input data to the pooling
operator.</p></li>
-<li><p><strong>kernel</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
pooling kernel size: (y, x) or (d, y, x)</p></li>
-<li><p><strong>pool_type</strong> (<em>{'avg'</em><em>,
</em><em>'max'</em><em>, </em><em>'sum'}</em><em>,</em><em>optional</em><em>,
</em><em>default='max'</em>) – Pooling type to be applied.</p></li>
-<li><p><strong>global_pool</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Ignore kernel size, do
global pooling based on current input feature map.</p></li>
-<li><p><strong>pooling_convention</strong> (<em>{'full'</em><em>,
</em><em>'valid'}</em><em>,</em><em>optional</em><em>,
</em><em>default='valid'</em>) – Pooling convention to be applied.</p></li>
-<li><p><strong>stride</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
stride: for pooling (y, x) or (d, y, x)</p></li>
-<li><p><strong>pad</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) – pad
for pooling: (y, x) or (d, y, x)</p></li>
-<li><p><strong>out</strong> (<a class="reference internal"
href="../ndarray.html#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray"><em>NDArray</em></a><em>, </em><em>optional</em>)
– The output NDArray to hold the result.</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p><strong>out</strong> – The output of this
function.</p>
-</dd>
-<dt class="field-odd">Return type</dt>
-<dd class="field-odd"><p><a class="reference internal"
href="../ndarray.html#mxnet.ndarray.NDArray"
title="mxnet.ndarray.NDArray">NDArray</a> or list of NDArrays</p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="function">
<dt id="mxnet.ndarray.op.RNN">
<code class="sig-prename descclassname">mxnet.ndarray.op.</code><code
class="sig-name descname">RNN</code><span class="sig-paren">(</span><em
class="sig-param">data=None</em>, <em class="sig-param">parameters=None</em>,
<em class="sig-param">state=None</em>, <em
class="sig-param">state_cell=None</em>, <em
class="sig-param">sequence_length=None</em>, <em
class="sig-param">state_size=_Null</em>, <em
class="sig-param">num_layers=_Null</em>, <em
class="sig-param">bidirectional=_Null</em>, <e [...]
<dd><p>Applies recurrent layers to input data. Currently, vanilla RNN, LSTM
and GRU are
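The removed `Pooling_v1` documentation above defines the output shape through a convention-dependent function *f* (`valid` uses `floor`, `full` uses `ceil`), but the highlighted expressions are truncated in the diff. As a hedged sketch, assuming the standard MXNet forms `floor((x + 2p - k)/s) + 1` and `ceil((x + 2p - k)/s) + 1` (the function name `pool_out_size` is illustrative, not part of the API):

```python
import math

def pool_out_size(x, k, p, s, convention="valid"):
    """Output length along one spatial axis, per the Pooling_v1 docs.

    Assumed formulas (the diff truncates them):
      valid (default):        floor((x + 2*p - k) / s) + 1
      full (Caffe-compatible): ceil((x + 2*p - k) / s) + 1
    """
    if convention == "valid":
        return math.floor((x + 2 * p - k) / s) + 1
    return math.ceil((x + 2 * p - k) / s) + 1

# 2-D pooling maps (batch, channel, height, width) ->
# (batch, num_filter, out_height, out_width); each spatial axis uses f:
out_h = pool_out_size(32, 3, 1, 2)          # valid: floor(31/2)+1 = 16
out_w = pool_out_size(32, 3, 1, 2, "full")  # full:  ceil(31/2)+1  = 17
```

With the same input size, `full` can emit one extra output element because `ceil` keeps a final window that only partially covers the padded input.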
diff --git a/api/python/docs/api/legacy/symbol/op/index.html
b/api/python/docs/api/legacy/symbol/op/index.html
index a92fcd4..135f9e5 100644
--- a/api/python/docs/api/legacy/symbol/op/index.html
+++ b/api/python/docs/api/legacy/symbol/op/index.html
@@ -1229,76 +1229,70 @@ Show Source
<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Convolution" title="mxnet.symbol.op.Convolution"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Convolution</span></code></a>([data, weight, bias, kernel,
…])</p></td>
<td><p>Compute <em>N</em>-D convolution on <em>(N+2)</em>-D input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Convolution_v1"
title="mxnet.symbol.op.Convolution_v1"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">Convolution_v1</span></code></a>([data,
weight, bias, kernel, …])</p></td>
-<td><p>This operator is DEPRECATED.</p></td>
-</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Correlation" title="mxnet.symbol.op.Correlation"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Correlation</span></code></a>([data1, data2, kernel_size,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Correlation" title="mxnet.symbol.op.Correlation"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Correlation</span></code></a>([data1, data2, kernel_size,
…])</p></td>
<td><p>Applies correlation to inputs.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Crop" title="mxnet.symbol.op.Crop"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Crop</span></code></a>(*data, **kwargs)</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Crop" title="mxnet.symbol.op.Crop"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Crop</span></code></a>(*data, **kwargs)</p></td>
<td><p><div class="admonition note">
<p class="admonition-title">Note</p>
<p><cite>Crop</cite> is deprecated. Use <cite>slice</cite> instead.</p>
</div>
</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Custom" title="mxnet.symbol.op.Custom"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Custom</span></code></a>(*data, **kwargs)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Custom" title="mxnet.symbol.op.Custom"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Custom</span></code></a>(*data, **kwargs)</p></td>
<td><p>Apply a custom operator implemented in a frontend language (like
Python).</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Deconvolution"
title="mxnet.symbol.op.Deconvolution"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">Deconvolution</span></code></a>([data,
weight, bias, kernel, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Deconvolution"
title="mxnet.symbol.op.Deconvolution"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">Deconvolution</span></code></a>([data,
weight, bias, kernel, …])</p></td>
<td><p>Computes 1D or 2D transposed convolution (aka fractionally strided
convolution) of the input tensor.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Dropout" title="mxnet.symbol.op.Dropout"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Dropout</span></code></a>([data, p, mode, axes, cudnn_off,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Dropout" title="mxnet.symbol.op.Dropout"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Dropout</span></code></a>([data, p, mode, axes, cudnn_off,
…])</p></td>
<td><p>Applies dropout operation to input array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.ElementWiseSum"
title="mxnet.symbol.op.ElementWiseSum"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">ElementWiseSum</span></code></a>(*args,
**kwargs)</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.ElementWiseSum"
title="mxnet.symbol.op.ElementWiseSum"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">ElementWiseSum</span></code></a>(*args,
**kwargs)</p></td>
<td><p>Adds all input arguments element-wise.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Embedding" title="mxnet.symbol.op.Embedding"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Embedding</span></code></a>([data, weight, input_dim, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Embedding" title="mxnet.symbol.op.Embedding"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Embedding</span></code></a>([data, weight, input_dim, …])</p></td>
<td><p>Maps integer indices to vector representations (embeddings).</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Flatten" title="mxnet.symbol.op.Flatten"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Flatten</span></code></a>([data, name, attr, out])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Flatten" title="mxnet.symbol.op.Flatten"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Flatten</span></code></a>([data, name, attr, out])</p></td>
<td><p>Flattens the input array into a 2-D array by collapsing the higher
dimensions.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.FullyConnected"
title="mxnet.symbol.op.FullyConnected"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">FullyConnected</span></code></a>([data,
weight, bias, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.FullyConnected"
title="mxnet.symbol.op.FullyConnected"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">FullyConnected</span></code></a>([data,
weight, bias, …])</p></td>
<td><p>Applies a linear transformation: <span class="math notranslate
nohighlight">\(Y = XW^T + b\)</span>.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.GridGenerator"
title="mxnet.symbol.op.GridGenerator"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">GridGenerator</span></code></a>([data,
transform_type, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.GridGenerator"
title="mxnet.symbol.op.GridGenerator"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">GridGenerator</span></code></a>([data,
transform_type, …])</p></td>
<td><p>Generates 2D sampling grid for bilinear sampling.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.GroupNorm" title="mxnet.symbol.op.GroupNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GroupNorm</span></code></a>([data, gamma, beta, num_groups,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.GroupNorm" title="mxnet.symbol.op.GroupNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GroupNorm</span></code></a>([data, gamma, beta, num_groups,
…])</p></td>
<td><p>Group normalization.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.IdentityAttachKLSparseReg"
title="mxnet.symbol.op.IdentityAttachKLSparseReg"><code class="xref py py-obj
docutils literal notranslate"><span
class="pre">IdentityAttachKLSparseReg</span></code></a>([data, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.IdentityAttachKLSparseReg"
title="mxnet.symbol.op.IdentityAttachKLSparseReg"><code class="xref py py-obj
docutils literal notranslate"><span
class="pre">IdentityAttachKLSparseReg</span></code></a>([data, …])</p></td>
<td><p>Apply a sparse regularization to the output of a sigmoid activation
function.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.InstanceNorm" title="mxnet.symbol.op.InstanceNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">InstanceNorm</span></code></a>([data, gamma, beta, eps, name,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.InstanceNorm" title="mxnet.symbol.op.InstanceNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">InstanceNorm</span></code></a>([data, gamma, beta, eps, name,
…])</p></td>
<td><p>Applies instance normalization to the n-dimensional input
array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.L2Normalization"
title="mxnet.symbol.op.L2Normalization"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">L2Normalization</span></code></a>([data,
eps, mode, name, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.L2Normalization"
title="mxnet.symbol.op.L2Normalization"><code class="xref py py-obj docutils
literal notranslate"><span class="pre">L2Normalization</span></code></a>([data,
eps, mode, name, …])</p></td>
<td><p>Normalize the input array using the L2 norm.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.LRN" title="mxnet.symbol.op.LRN"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">LRN</span></code></a>([data, alpha, beta, knorm, nsize, name,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.LRN" title="mxnet.symbol.op.LRN"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">LRN</span></code></a>([data, alpha, beta, knorm, nsize, name,
…])</p></td>
<td><p>Applies local response normalization to the input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.LayerNorm" title="mxnet.symbol.op.LayerNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LayerNorm</span></code></a>([data, gamma, beta, axis, eps,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.LayerNorm" title="mxnet.symbol.op.LayerNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LayerNorm</span></code></a>([data, gamma, beta, axis, eps,
…])</p></td>
<td><p>Layer normalization.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.LeakyReLU" title="mxnet.symbol.op.LeakyReLU"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LeakyReLU</span></code></a>([data, gamma, act_type, slope,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.LeakyReLU" title="mxnet.symbol.op.LeakyReLU"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">LeakyReLU</span></code></a>([data, gamma, act_type, slope,
…])</p></td>
<td><p>Applies Leaky rectified linear unit activation element-wise to the
input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.MakeLoss" title="mxnet.symbol.op.MakeLoss"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">MakeLoss</span></code></a>([data, grad_scale, valid_thresh,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.MakeLoss" title="mxnet.symbol.op.MakeLoss"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">MakeLoss</span></code></a>([data, grad_scale, valid_thresh,
…])</p></td>
<td><p>Make your own loss function in network construction.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Pad" title="mxnet.symbol.op.Pad"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Pad</span></code></a>([data, mode, pad_width, constant_value,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Pad" title="mxnet.symbol.op.Pad"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Pad</span></code></a>([data, mode, pad_width, constant_value,
…])</p></td>
<td><p>Pads an input array with a constant or edge values of the
array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Pooling" title="mxnet.symbol.op.Pooling"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Pooling</span></code></a>([data, kernel, pool_type, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Pooling" title="mxnet.symbol.op.Pooling"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Pooling</span></code></a>([data, kernel, pool_type, …])</p></td>
<td><p>Performs pooling on the input.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.op.Pooling_v1" title="mxnet.symbol.op.Pooling_v1"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Pooling_v1</span></code></a>([data, kernel, pool_type, …])</p></td>
-<td><p>This operator is DEPRECATED.</p></td>
-</tr>
<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.op.RNN" title="mxnet.symbol.op.RNN"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">RNN</span></code></a>([data, parameters, state, state_cell,
…])</p></td>
<td><p>Applies recurrent layers to input data.</p></td>
</tr>
@@ -2485,48 +2479,6 @@ default layout: NCW for 1d, NCHW for 2d and NCDHW for
3d.NHWC and NDHWC are only
</dd></dl>
<dl class="function">
-<dt id="mxnet.symbol.op.Convolution_v1">
-<code class="sig-prename descclassname">mxnet.symbol.op.</code><code
class="sig-name descname">Convolution_v1</code><span
class="sig-paren">(</span><em class="sig-param">data=None</em>, <em
class="sig-param">weight=None</em>, <em class="sig-param">bias=None</em>, <em
class="sig-param">kernel=_Null</em>, <em class="sig-param">stride=_Null</em>,
<em class="sig-param">dilate=_Null</em>, <em class="sig-param">pad=_Null</em>,
<em class="sig-param">num_filter=_Null</em>, <em class="sig-param"> [...]
-<dd><p>This operator is DEPRECATED. Apply convolution to input then add a
bias.</p>
-<dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>data</strong> (<a class="reference internal"
href="../symbol.html#mxnet.symbol.Symbol"
title="mxnet.symbol.Symbol"><em>Symbol</em></a>) – Input data to the
ConvolutionV1Op.</p></li>
-<li><p><strong>weight</strong> (<a class="reference internal"
href="../symbol.html#mxnet.symbol.Symbol"
title="mxnet.symbol.Symbol"><em>Symbol</em></a>) – Weight matrix.</p></li>
-<li><p><strong>bias</strong> (<a class="reference internal"
href="../symbol.html#mxnet.symbol.Symbol"
title="mxnet.symbol.Symbol"><em>Symbol</em></a>) – Bias parameter.</p></li>
-<li><p><strong>kernel</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>, </em><em>required</em>)
– convolution kernel size: (h, w) or (d, h, w)</p></li>
-<li><p><strong>stride</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
convolution stride: (h, w) or (d, h, w)</p></li>
-<li><p><strong>dilate</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
convolution dilate: (h, w) or (d, h, w)</p></li>
-<li><p><strong>pad</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) – pad
for convolution: (h, w) or (d, h, w)</p></li>
-<li><p><strong>num_filter</strong> (<em>int</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>required</em>) –
convolution filter(channel) number</p></li>
-<li><p><strong>num_group</strong> (<em>int</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>optional</em><em>,
</em><em>default=1</em>) – Number of group partitions. Equivalent to slicing
input into num_group
-partitions, apply convolution on each, then concatenate the results</p></li>
-<li><p><strong>workspace</strong> (<em>long</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>optional</em><em>,
</em><em>default=1024</em>) – Maximum temporary workspace allowed for
convolution (MB).This parameter determines the effective batch size of the
convolution kernel, which may be smaller than the given batch size. Also, the
workspace will be automatically enlarged to make sure that we can run the
kernel with batch_size=1</p></li>
-<li><p><strong>no_bias</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Whether to disable bias
parameter.</p></li>
-<li><p><strong>cudnn_tune</strong> (<em>{None</em><em>,
</em><em>'fastest'</em><em>, </em><em>'limited_workspace'</em><em>,
</em><em>'off'}</em><em>,</em><em>optional</em><em>,
</em><em>default='None'</em>) – Whether to pick convolution algo by running
performance test.
-Leads to higher startup time but may give faster speed. Options are:
-‘off’: no tuning
-‘limited_workspace’: run test and pick the fastest algorithm that doesn’t
exceed workspace limit.
-‘fastest’: pick the fastest algorithm and ignore workspace limit.
-If set to None (default), behavior is determined by environment
-variable MXNET_CUDNN_AUTOTUNE_DEFAULT: 0 for off,
-1 for limited workspace (default), 2 for fastest.</p></li>
-<li><p><strong>cudnn_off</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Turn off cudnn for this
layer.</p></li>
-<li><p><strong>layout</strong> (<em>{None</em><em>, </em><em>'NCDHW'</em><em>,
</em><em>'NCHW'</em><em>, </em><em>'NDHWC'</em><em>,
</em><em>'NHWC'}</em><em>,</em><em>optional</em><em>,
</em><em>default='None'</em>) – Set layout for input, output and weight. Empty
for
-default layout: NCHW for 2d and NCDHW for 3d.</p></li>
-<li><p><strong>name</strong> (<em>string</em><em>, </em><em>optional.</em>) –
Name of the resulting symbol.</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p>The result symbol.</p>
-</dd>
-<dt class="field-odd">Return type</dt>
-<dd class="field-odd"><p><a class="reference internal"
href="../symbol.html#mxnet.symbol.Symbol"
title="mxnet.symbol.Symbol">Symbol</a></p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="function">
<dt id="mxnet.symbol.op.Correlation">
<code class="sig-prename descclassname">mxnet.symbol.op.</code><code
class="sig-name descname">Correlation</code><span class="sig-paren">(</span><em
class="sig-param">data1=None</em>, <em class="sig-param">data2=None</em>, <em
class="sig-param">kernel_size=_Null</em>, <em
class="sig-param">max_displacement=_Null</em>, <em
class="sig-param">stride1=_Null</em>, <em class="sig-param">stride2=_Null</em>,
<em class="sig-param">pad_size=_Null</em>, <em
class="sig-param">is_multiply=_Null</em>, [...]
<dd><p>Applies correlation to inputs.</p>
@@ -3454,70 +3406,6 @@ default layout: NCW for 1d, NCHW for 2d and NCDHW for
3d.</p></li>
</dd></dl>
<dl class="function">
-<dt id="mxnet.symbol.op.Pooling_v1">
-<code class="sig-prename descclassname">mxnet.symbol.op.</code><code
class="sig-name descname">Pooling_v1</code><span class="sig-paren">(</span><em
class="sig-param">data=None</em>, <em class="sig-param">kernel=_Null</em>, <em
class="sig-param">pool_type=_Null</em>, <em
class="sig-param">global_pool=_Null</em>, <em
class="sig-param">pooling_convention=_Null</em>, <em
class="sig-param">stride=_Null</em>, <em class="sig-param">pad=_Null</em>, <em
class="sig-param">name=None</em>, <em class [...]
-<dd><p>This operator is DEPRECATED.
-Perform pooling on the input.</p>
-<p>The shapes for 2-D pooling is</p>
-<ul>
-<li><p><strong>data</strong>: <em>(batch_size, channel, height,
width)</em></p></li>
-<li><p><strong>out</strong>: <em>(batch_size, num_filter, out_height,
out_width)</em>, with:</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">out_height</span> <span
class="o">=</span> <span class="n">f</span><span class="p">(</span><span
class="n">height</span><span class="p">,</span> <span
class="n">kernel</span><span class="p">[</span><span class="mi">0</span><span
class="p">],</span> <span class="n">pad</span><span class="p">[</span><span
class="mi">0</span><span class="p">],</span> <span class="n">stride</span><span
class=" [...]
-<span class="n">out_width</span> <span class="o">=</span> <span
class="n">f</span><span class="p">(</span><span class="n">width</span><span
class="p">,</span> <span class="n">kernel</span><span class="p">[</span><span
class="mi">1</span><span class="p">],</span> <span class="n">pad</span><span
class="p">[</span><span class="mi">1</span><span class="p">],</span> <span
class="n">stride</span><span class="p">[</span><span class="mi">1</span><span
class="p">])</span>
-</pre></div>
-</div>
-</li>
-</ul>
-<p>The definition of <em>f</em> depends on <code class="docutils literal
notranslate"><span class="pre">pooling_convention</span></code>, which has two
options:</p>
-<ul>
-<li><p><strong>valid</strong> (default):</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">f</span><span
class="p">(</span><span class="n">x</span><span class="p">,</span> <span
class="n">k</span><span class="p">,</span> <span class="n">p</span><span
class="p">,</span> <span class="n">s</span><span class="p">)</span> <span
class="o">=</span> <span class="n">floor</span><span class="p">((</span><span
class="n">x</span><span class="o">+</span><span class="mi">2</span><span
class=" [...]
-</pre></div>
-</div>
-</li>
-<li><p><strong>full</strong>, which is compatible with Caffe:</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">f</span><span
class="p">(</span><span class="n">x</span><span class="p">,</span> <span
class="n">k</span><span class="p">,</span> <span class="n">p</span><span
class="p">,</span> <span class="n">s</span><span class="p">)</span> <span
class="o">=</span> <span class="n">ceil</span><span class="p">((</span><span
class="n">x</span><span class="o">+</span><span class="mi">2</span><span
class="o [...]
-</pre></div>
-</div>
-</li>
-</ul>
-<p>But <code class="docutils literal notranslate"><span
class="pre">global_pool</span></code> is set to be true, then do a global
pooling, namely reset
-<code class="docutils literal notranslate"><span
class="pre">kernel=(height,</span> <span class="pre">width)</span></code>.</p>
-<p>Three pooling options are supported by <code class="docutils literal
notranslate"><span class="pre">pool_type</span></code>:</p>
-<ul class="simple">
-<li><p><strong>avg</strong>: average pooling</p></li>
-<li><p><strong>max</strong>: max pooling</p></li>
-<li><p><strong>sum</strong>: sum pooling</p></li>
-</ul>
-<p>1-D pooling is special case of 2-D pooling with <em>weight=1</em> and
-<em>kernel[1]=1</em>.</p>
-<p>For 3-D pooling, an additional <em>depth</em> dimension is added before
-<em>height</em>. Namely the input data will have shape <em>(batch_size,
channel, depth,
-height, width)</em>.</p>
-<p>Defined in /work/mxnet/src/operator/pooling_v1.cc:L104</p>
-<dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>data</strong> (<a class="reference internal"
href="../symbol.html#mxnet.symbol.Symbol"
title="mxnet.symbol.Symbol"><em>Symbol</em></a>) – Input data to the pooling
operator.</p></li>
-<li><p><strong>kernel</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
pooling kernel size: (y, x) or (d, y, x)</p></li>
-<li><p><strong>pool_type</strong> (<em>{'avg'</em><em>,
</em><em>'max'</em><em>, </em><em>'sum'}</em><em>,</em><em>optional</em><em>,
</em><em>default='max'</em>) – Pooling type to be applied.</p></li>
-<li><p><strong>global_pool</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Ignore kernel size, do
global pooling based on current input feature map.</p></li>
-<li><p><strong>pooling_convention</strong> (<em>{'full'</em><em>,
</em><em>'valid'}</em><em>,</em><em>optional</em><em>,
</em><em>default='valid'</em>) – Pooling convention to be applied.</p></li>
-<li><p><strong>stride</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
stride: for pooling (y, x) or (d, y, x)</p></li>
-<li><p><strong>pad</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) – pad
for pooling: (y, x) or (d, y, x)</p></li>
-<li><p><strong>name</strong> (<em>string</em><em>, </em><em>optional.</em>) –
Name of the resulting symbol.</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p>The result symbol.</p>
-</dd>
-<dt class="field-odd">Return type</dt>
-<dd class="field-odd"><p><a class="reference internal"
href="../symbol.html#mxnet.symbol.Symbol"
title="mxnet.symbol.Symbol">Symbol</a></p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="function">
<dt id="mxnet.symbol.op.RNN">
<code class="sig-prename descclassname">mxnet.symbol.op.</code><code
class="sig-name descname">RNN</code><span class="sig-paren">(</span><em
class="sig-param">data=None</em>, <em class="sig-param">parameters=None</em>,
<em class="sig-param">state=None</em>, <em
class="sig-param">state_cell=None</em>, <em
class="sig-param">sequence_length=None</em>, <em
class="sig-param">state_size=_Null</em>, <em
class="sig-param">num_layers=_Null</em>, <em
class="sig-param">bidirectional=_Null</em>, <em [...]
<dd><p>Applies recurrent layers to input data. Currently, vanilla RNN, LSTM
and GRU are
diff --git a/api/python/docs/api/legacy/symbol/symbol.html
b/api/python/docs/api/legacy/symbol/symbol.html
index 2c8aa25..1e3d93c 100644
--- a/api/python/docs/api/legacy/symbol/symbol.html
+++ b/api/python/docs/api/legacy/symbol/symbol.html
@@ -1229,76 +1229,70 @@ Show Source
<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Convolution" title="mxnet.symbol.Convolution"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Convolution</span></code></a>([data, weight, bias, kernel,
…])</p></td>
<td><p>Compute <em>N</em>-D convolution on <em>(N+2)</em>-D input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Convolution_v1" title="mxnet.symbol.Convolution_v1"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Convolution_v1</span></code></a>([data, weight, bias, kernel,
…])</p></td>
-<td><p>This operator is DEPRECATED.</p></td>
-</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Correlation" title="mxnet.symbol.Correlation"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Correlation</span></code></a>([data1, data2, kernel_size,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Correlation" title="mxnet.symbol.Correlation"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Correlation</span></code></a>([data1, data2, kernel_size,
…])</p></td>
<td><p>Applies correlation to inputs.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Crop" title="mxnet.symbol.Crop"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">Crop</span></code></a>(*data,
**kwargs)</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Crop" title="mxnet.symbol.Crop"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">Crop</span></code></a>(*data,
**kwargs)</p></td>
<td><p><div class="admonition note">
<p class="admonition-title">Note</p>
<p><cite>Crop</cite> is deprecated. Use <cite>slice</cite> instead.</p>
</div>
</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Custom" title="mxnet.symbol.Custom"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Custom</span></code></a>(*data, **kwargs)</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Custom" title="mxnet.symbol.Custom"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Custom</span></code></a>(*data, **kwargs)</p></td>
<td><p>Apply a custom operator implemented in a frontend language (like
Python).</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Deconvolution" title="mxnet.symbol.Deconvolution"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Deconvolution</span></code></a>([data, weight, bias, kernel,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Deconvolution" title="mxnet.symbol.Deconvolution"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Deconvolution</span></code></a>([data, weight, bias, kernel,
…])</p></td>
<td><p>Computes 1D or 2D transposed convolution (aka fractionally strided
convolution) of the input tensor.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Dropout" title="mxnet.symbol.Dropout"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Dropout</span></code></a>([data, p, mode, axes, cudnn_off,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Dropout" title="mxnet.symbol.Dropout"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Dropout</span></code></a>([data, p, mode, axes, cudnn_off,
…])</p></td>
<td><p>Applies dropout operation to input array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.ElementWiseSum" title="mxnet.symbol.ElementWiseSum"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">ElementWiseSum</span></code></a>(*args, **kwargs)</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.ElementWiseSum" title="mxnet.symbol.ElementWiseSum"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">ElementWiseSum</span></code></a>(*args, **kwargs)</p></td>
<td><p>Adds all input arguments element-wise.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Embedding" title="mxnet.symbol.Embedding"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Embedding</span></code></a>([data, weight, input_dim, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Embedding" title="mxnet.symbol.Embedding"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">Embedding</span></code></a>([data, weight, input_dim, …])</p></td>
<td><p>Maps integer indices to vector representations (embeddings).</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Flatten" title="mxnet.symbol.Flatten"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Flatten</span></code></a>([data, name, attr, out])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Flatten" title="mxnet.symbol.Flatten"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Flatten</span></code></a>([data, name, attr, out])</p></td>
<td><p>Flattens the input array into a 2-D array by collapsing the higher
dimensions.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.FullyConnected" title="mxnet.symbol.FullyConnected"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">FullyConnected</span></code></a>([data, weight, bias, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.FullyConnected" title="mxnet.symbol.FullyConnected"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">FullyConnected</span></code></a>([data, weight, bias, …])</p></td>
<td><p>Applies a linear transformation: <span class="math notranslate
nohighlight">\(Y = XW^T + b\)</span>.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.GridGenerator" title="mxnet.symbol.GridGenerator"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GridGenerator</span></code></a>([data, transform_type, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.GridGenerator" title="mxnet.symbol.GridGenerator"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">GridGenerator</span></code></a>([data, transform_type, …])</p></td>
<td><p>Generates 2D sampling grid for bilinear sampling.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.GroupNorm" title="mxnet.symbol.GroupNorm"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">GroupNorm</span></code></a>([data, gamma, beta, num_groups,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.GroupNorm" title="mxnet.symbol.GroupNorm"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">GroupNorm</span></code></a>([data, gamma, beta, num_groups,
…])</p></td>
<td><p>Group normalization.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.IdentityAttachKLSparseReg"
title="mxnet.symbol.IdentityAttachKLSparseReg"><code class="xref py py-obj
docutils literal notranslate"><span
class="pre">IdentityAttachKLSparseReg</span></code></a>([data, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.IdentityAttachKLSparseReg"
title="mxnet.symbol.IdentityAttachKLSparseReg"><code class="xref py py-obj
docutils literal notranslate"><span
class="pre">IdentityAttachKLSparseReg</span></code></a>([data, …])</p></td>
<td><p>Apply a sparse regularization to the output a sigmoid activation
function.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.InstanceNorm" title="mxnet.symbol.InstanceNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">InstanceNorm</span></code></a>([data, gamma, beta, eps, name,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.InstanceNorm" title="mxnet.symbol.InstanceNorm"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">InstanceNorm</span></code></a>([data, gamma, beta, eps, name,
…])</p></td>
<td><p>Applies instance normalization to the n-dimensional input
array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.L2Normalization" title="mxnet.symbol.L2Normalization"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">L2Normalization</span></code></a>([data, eps, mode, name,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.L2Normalization" title="mxnet.symbol.L2Normalization"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">L2Normalization</span></code></a>([data, eps, mode, name,
…])</p></td>
<td><p>Normalize the input array using the L2 norm.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.LRN" title="mxnet.symbol.LRN"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">LRN</span></code></a>([data,
alpha, beta, knorm, nsize, name, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.LRN" title="mxnet.symbol.LRN"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">LRN</span></code></a>([data,
alpha, beta, knorm, nsize, name, …])</p></td>
<td><p>Applies local response normalization to the input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.LayerNorm" title="mxnet.symbol.LayerNorm"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">LayerNorm</span></code></a>([data, gamma, beta, axis, eps,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.LayerNorm" title="mxnet.symbol.LayerNorm"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">LayerNorm</span></code></a>([data, gamma, beta, axis, eps,
…])</p></td>
<td><p>Layer normalization.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.LeakyReLU" title="mxnet.symbol.LeakyReLU"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">LeakyReLU</span></code></a>([data, gamma, act_type, slope,
…])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.LeakyReLU" title="mxnet.symbol.LeakyReLU"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">LeakyReLU</span></code></a>([data, gamma, act_type, slope,
…])</p></td>
<td><p>Applies Leaky rectified linear unit activation element-wise to the
input.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.MakeLoss" title="mxnet.symbol.MakeLoss"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">MakeLoss</span></code></a>([data, grad_scale, valid_thresh,
…])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.MakeLoss" title="mxnet.symbol.MakeLoss"><code class="xref
py py-obj docutils literal notranslate"><span
class="pre">MakeLoss</span></code></a>([data, grad_scale, valid_thresh,
…])</p></td>
<td><p>Make your own loss function in network construction.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Pad" title="mxnet.symbol.Pad"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">Pad</span></code></a>([data,
mode, pad_width, constant_value, …])</p></td>
+<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Pad" title="mxnet.symbol.Pad"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">Pad</span></code></a>([data,
mode, pad_width, constant_value, …])</p></td>
<td><p>Pads an input array with a constant or edge values of the
array.</p></td>
</tr>
-<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.Pooling" title="mxnet.symbol.Pooling"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Pooling</span></code></a>([data, kernel, pool_type, …])</p></td>
+<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Pooling" title="mxnet.symbol.Pooling"><code class="xref py
py-obj docutils literal notranslate"><span
class="pre">Pooling</span></code></a>([data, kernel, pool_type, …])</p></td>
<td><p>Performs pooling on the input.</p></td>
</tr>
-<tr class="row-even"><td><p><a class="reference internal"
href="#mxnet.symbol.Pooling_v1" title="mxnet.symbol.Pooling_v1"><code
class="xref py py-obj docutils literal notranslate"><span
class="pre">Pooling_v1</span></code></a>([data, kernel, pool_type, …])</p></td>
-<td><p>This operator is DEPRECATED.</p></td>
-</tr>
<tr class="row-odd"><td><p><a class="reference internal"
href="#mxnet.symbol.RNN" title="mxnet.symbol.RNN"><code class="xref py py-obj
docutils literal notranslate"><span class="pre">RNN</span></code></a>([data,
parameters, state, state_cell, …])</p></td>
<td><p>Applies recurrent layers to input data.</p></td>
</tr>
@@ -2551,48 +2545,6 @@ default layout: NCW for 1d, NCHW for 2d and NCDHW for
3d.NHWC and NDHWC are only
</dd></dl>
<dl class="function">
-<dt id="mxnet.symbol.Convolution_v1">
-<code class="sig-prename descclassname">mxnet.symbol.</code><code
class="sig-name descname">Convolution_v1</code><span
class="sig-paren">(</span><em class="sig-param">data=None</em>, <em
class="sig-param">weight=None</em>, <em class="sig-param">bias=None</em>, <em
class="sig-param">kernel=_Null</em>, <em class="sig-param">stride=_Null</em>,
<em class="sig-param">dilate=_Null</em>, <em class="sig-param">pad=_Null</em>,
<em class="sig-param">num_filter=_Null</em>, <em class="sig-param">num [...]
-<dd><p>This operator is DEPRECATED. Apply convolution to input then add a
bias.</p>
-<dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>data</strong> (<a class="reference internal"
href="#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a>) –
Input data to the ConvolutionV1Op.</p></li>
-<li><p><strong>weight</strong> (<a class="reference internal"
href="#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a>) –
Weight matrix.</p></li>
-<li><p><strong>bias</strong> (<a class="reference internal"
href="#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a>) –
Bias parameter.</p></li>
-<li><p><strong>kernel</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>, </em><em>required</em>)
– convolution kernel size: (h, w) or (d, h, w)</p></li>
-<li><p><strong>stride</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
convolution stride: (h, w) or (d, h, w)</p></li>
-<li><p><strong>dilate</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
convolution dilate: (h, w) or (d, h, w)</p></li>
-<li><p><strong>pad</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) – pad
for convolution: (h, w) or (d, h, w)</p></li>
-<li><p><strong>num_filter</strong> (<em>int</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>required</em>) –
convolution filter(channel) number</p></li>
-<li><p><strong>num_group</strong> (<em>int</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>optional</em><em>,
</em><em>default=1</em>) – Number of group partitions. Equivalent to slicing
input into num_group
-partitions, apply convolution on each, then concatenate the results</p></li>
-<li><p><strong>workspace</strong> (<em>long</em><em>
(</em><em>non-negative</em><em>)</em><em>, </em><em>optional</em><em>,
</em><em>default=1024</em>) – Maximum temporary workspace allowed for
convolution (MB).This parameter determines the effective batch size of the
convolution kernel, which may be smaller than the given batch size. Also, the
workspace will be automatically enlarged to make sure that we can run the
kernel with batch_size=1</p></li>
-<li><p><strong>no_bias</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Whether to disable bias
parameter.</p></li>
-<li><p><strong>cudnn_tune</strong> (<em>{None</em><em>,
</em><em>'fastest'</em><em>, </em><em>'limited_workspace'</em><em>,
</em><em>'off'}</em><em>,</em><em>optional</em><em>,
</em><em>default='None'</em>) – Whether to pick convolution algo by running
performance test.
-Leads to higher startup time but may give faster speed. Options are:
-‘off’: no tuning
-‘limited_workspace’: run test and pick the fastest algorithm that doesn’t
exceed workspace limit.
-‘fastest’: pick the fastest algorithm and ignore workspace limit.
-If set to None (default), behavior is determined by environment
-variable MXNET_CUDNN_AUTOTUNE_DEFAULT: 0 for off,
-1 for limited workspace (default), 2 for fastest.</p></li>
-<li><p><strong>cudnn_off</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Turn off cudnn for this
layer.</p></li>
-<li><p><strong>layout</strong> (<em>{None</em><em>, </em><em>'NCDHW'</em><em>,
</em><em>'NCHW'</em><em>, </em><em>'NDHWC'</em><em>,
</em><em>'NHWC'}</em><em>,</em><em>optional</em><em>,
</em><em>default='None'</em>) – Set layout for input, output and weight. Empty
for
-default layout: NCHW for 2d and NCDHW for 3d.</p></li>
-<li><p><strong>name</strong> (<em>string</em><em>, </em><em>optional.</em>) –
Name of the resulting symbol.</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p>The result symbol.</p>
-</dd>
-<dt class="field-odd">Return type</dt>
-<dd class="field-odd"><p><a class="reference internal"
href="#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol">Symbol</a></p>
-</dd>
-</dl>
-</dd></dl>
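The cudnn_tune fallback described above (None defers to the MXNET_CUDNN_AUTOTUNE_DEFAULT environment variable: 0 = off, 1 = limited_workspace, 2 = fastest) can be sketched in plain Python. This is a hypothetical helper summarizing the documented behavior, not part of MXNet's API:

```python
import os

# Documented mapping of MXNET_CUDNN_AUTOTUNE_DEFAULT values
# to the cudnn_tune mode used when cudnn_tune is None.
AUTOTUNE_MODES = {0: "off", 1: "limited_workspace", 2: "fastest"}

def effective_cudnn_tune(cudnn_tune=None):
    """Resolve the autotune mode the way the docs above describe.

    An explicit cudnn_tune value wins; otherwise the environment
    variable decides, defaulting to 1 (limited_workspace).
    """
    if cudnn_tune is not None:
        return cudnn_tune
    env = int(os.environ.get("MXNET_CUDNN_AUTOTUNE_DEFAULT", "1"))
    return AUTOTUNE_MODES[env]

print(effective_cudnn_tune("fastest"))  # explicit value wins: fastest
print(effective_cudnn_tune())           # falls back to the env variable
```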
-
-<dl class="function">
<dt id="mxnet.symbol.Correlation">
<code class="sig-prename descclassname">mxnet.symbol.</code><code
class="sig-name descname">Correlation</code><span class="sig-paren">(</span><em
class="sig-param">data1=None</em>, <em class="sig-param">data2=None</em>, <em
class="sig-param">kernel_size=_Null</em>, <em
class="sig-param">max_displacement=_Null</em>, <em
class="sig-param">stride1=_Null</em>, <em class="sig-param">stride2=_Null</em>,
<em class="sig-param">pad_size=_Null</em>, <em
class="sig-param">is_multiply=_Null</em>, <e [...]
<dd><p>Applies correlation to inputs.</p>
@@ -3520,70 +3472,6 @@ default layout: NCW for 1d, NCHW for 2d and NCDHW for
3d.</p></li>
</dd></dl>
<dl class="function">
-<dt id="mxnet.symbol.Pooling_v1">
-<code class="sig-prename descclassname">mxnet.symbol.</code><code
class="sig-name descname">Pooling_v1</code><span class="sig-paren">(</span><em
class="sig-param">data=None</em>, <em class="sig-param">kernel=_Null</em>, <em
class="sig-param">pool_type=_Null</em>, <em
class="sig-param">global_pool=_Null</em>, <em
class="sig-param">pooling_convention=_Null</em>, <em
class="sig-param">stride=_Null</em>, <em class="sig-param">pad=_Null</em>, <em
class="sig-param">name=None</em>, <em class="s [...]
-<dd><p>This operator is DEPRECATED.
-It performs pooling on the input.</p>
-<p>The shapes for 2-D pooling is</p>
-<ul>
-<li><p><strong>data</strong>: <em>(batch_size, channel, height,
width)</em></p></li>
-<li><p><strong>out</strong>: <em>(batch_size, channel, out_height,
out_width)</em>, with:</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">out_height</span> <span
class="o">=</span> <span class="n">f</span><span class="p">(</span><span
class="n">height</span><span class="p">,</span> <span
class="n">kernel</span><span class="p">[</span><span class="mi">0</span><span
class="p">],</span> <span class="n">pad</span><span class="p">[</span><span
class="mi">0</span><span class="p">],</span> <span class="n">stride</span><span
class=" [...]
-<span class="n">out_width</span> <span class="o">=</span> <span
class="n">f</span><span class="p">(</span><span class="n">width</span><span
class="p">,</span> <span class="n">kernel</span><span class="p">[</span><span
class="mi">1</span><span class="p">],</span> <span class="n">pad</span><span
class="p">[</span><span class="mi">1</span><span class="p">],</span> <span
class="n">stride</span><span class="p">[</span><span class="mi">1</span><span
class="p">])</span>
-</pre></div>
-</div>
-</li>
-</ul>
-<p>The definition of <em>f</em> depends on <code class="docutils literal
notranslate"><span class="pre">pooling_convention</span></code>, which has two
options:</p>
-<ul>
-<li><p><strong>valid</strong> (default):</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">f</span><span
class="p">(</span><span class="n">x</span><span class="p">,</span> <span
class="n">k</span><span class="p">,</span> <span class="n">p</span><span
class="p">,</span> <span class="n">s</span><span class="p">)</span> <span
class="o">=</span> <span class="n">floor</span><span class="p">((</span><span
class="n">x</span><span class="o">+</span><span class="mi">2</span><span
class=" [...]
-</pre></div>
-</div>
-</li>
-<li><p><strong>full</strong>, which is compatible with Caffe:</p>
-<div class="highlight-default notranslate"><div
class="highlight"><pre><span></span><span class="n">f</span><span
class="p">(</span><span class="n">x</span><span class="p">,</span> <span
class="n">k</span><span class="p">,</span> <span class="n">p</span><span
class="p">,</span> <span class="n">s</span><span class="p">)</span> <span
class="o">=</span> <span class="n">ceil</span><span class="p">((</span><span
class="n">x</span><span class="o">+</span><span class="mi">2</span><span
class="o [...]
-</pre></div>
-</div>
-</li>
-</ul>
-<p>But if <code class="docutils literal notranslate"><span
class="pre">global_pool</span></code> is set to true, a global pooling is
performed, i.e. the kernel is reset to
-<code class="docutils literal notranslate"><span
class="pre">kernel=(height,</span> <span class="pre">width)</span></code>.</p>
-<p>Three pooling options are supported by <code class="docutils literal
notranslate"><span class="pre">pool_type</span></code>:</p>
-<ul class="simple">
-<li><p><strong>avg</strong>: average pooling</p></li>
-<li><p><strong>max</strong>: max pooling</p></li>
-<li><p><strong>sum</strong>: sum pooling</p></li>
-</ul>
-<p>1-D pooling is a special case of 2-D pooling with <em>width=1</em> and
-<em>kernel[1]=1</em>.</p>
-<p>For 3-D pooling, an additional <em>depth</em> dimension is added before
-<em>height</em>. Namely the input data will have shape <em>(batch_size,
channel, depth,
-height, width)</em>.</p>
-<p>Defined in /work/mxnet/src/operator/pooling_v1.cc:L104</p>
-<dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>data</strong> (<a class="reference internal"
href="#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a>) –
Input data to the pooling operator.</p></li>
-<li><p><strong>kernel</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
pooling kernel size: (y, x) or (d, y, x)</p></li>
-<li><p><strong>pool_type</strong> (<em>{'avg'</em><em>,
</em><em>'max'</em><em>, </em><em>'sum'}</em><em>,</em><em>optional</em><em>,
</em><em>default='max'</em>) – Pooling type to be applied.</p></li>
-<li><p><strong>global_pool</strong> (<em>boolean</em><em>,
</em><em>optional</em><em>, </em><em>default=0</em>) – Ignore kernel size, do
global pooling based on current input feature map.</p></li>
-<li><p><strong>pooling_convention</strong> (<em>{'full'</em><em>,
</em><em>'valid'}</em><em>,</em><em>optional</em><em>,
</em><em>default='valid'</em>) – Pooling convention to be applied.</p></li>
-<li><p><strong>stride</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) –
stride: for pooling (y, x) or (d, y, x)</p></li>
-<li><p><strong>pad</strong>
(<em>Shape</em><em>(</em><em>tuple</em><em>)</em><em>,
</em><em>optional</em><em>, </em><em>default=</em><em>[</em><em>]</em>) – pad
for pooling: (y, x) or (d, y, x)</p></li>
-<li><p><strong>name</strong> (<em>string</em><em>, </em><em>optional.</em>) –
Name of the resulting symbol.</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p>The result symbol.</p>
-</dd>
-<dt class="field-odd">Return type</dt>
-<dd class="field-odd"><p><a class="reference internal"
href="#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol">Symbol</a></p>
-</dd>
-</dl>
-</dd></dl>
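The two pooling_convention formulas above can be checked with a short Python sketch. This is a hypothetical helper illustrating the documented output-size arithmetic for one spatial axis, not MXNet code:

```python
import math

def pool_out_size(x, k, p, s, convention="valid"):
    """Output length along one axis of Pooling_v1.

    valid: floor((x + 2*p - k) / s) + 1   (default)
    full:  ceil((x + 2*p - k) / s) + 1    (Caffe-compatible)
    """
    if convention == "valid":
        return math.floor((x + 2 * p - k) / s) + 1
    if convention == "full":
        return math.ceil((x + 2 * p - k) / s) + 1
    raise ValueError("unknown pooling_convention: %s" % convention)

# Height 10, kernel 3, pad 0, stride 2: the conventions differ
# exactly when (x + 2*p - k) is not divisible by s.
print(pool_out_size(10, 3, 0, 2, "valid"))  # 4
print(pool_out_size(10, 3, 0, 2, "full"))   # 5
```

The "full" convention can cover input positions past the padded edge, which is why it yields a larger (or equal) output size than "valid".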
-
-<dl class="function">
<dt id="mxnet.symbol.RNN">
<code class="sig-prename descclassname">mxnet.symbol.</code><code
class="sig-name descname">RNN</code><span class="sig-paren">(</span><em
class="sig-param">data=None</em>, <em class="sig-param">parameters=None</em>,
<em class="sig-param">state=None</em>, <em
class="sig-param">state_cell=None</em>, <em
class="sig-param">sequence_length=None</em>, <em
class="sig-param">state_size=_Null</em>, <em
class="sig-param">num_layers=_Null</em>, <em
class="sig-param">bidirectional=_Null</em>, <em cl [...]
<dd><p>Applies recurrent layers to input data. Currently, vanilla RNN, LSTM
and GRU are
diff --git a/api/python/docs/genindex.html b/api/python/docs/genindex.html
index e5971ab..d60cae5 100644
--- a/api/python/docs/genindex.html
+++ b/api/python/docs/genindex.html
@@ -2693,12 +2693,12 @@
</li>
<li><a
href="api/gluon/contrib/index.html#mxnet.gluon.contrib.rnn.Conv2DGRUCell">Conv2DGRUCell
(class in mxnet.gluon.contrib.rnn)</a>
</li>
+ </ul></td>
+ <td style="width: 33%; vertical-align: top;"><ul>
<li><a
href="api/gluon/contrib/index.html#mxnet.gluon.contrib.rnn.Conv2DLSTMCell">Conv2DLSTMCell
(class in mxnet.gluon.contrib.rnn)</a>
</li>
<li><a
href="api/gluon/contrib/index.html#mxnet.gluon.contrib.rnn.Conv2DRNNCell">Conv2DRNNCell
(class in mxnet.gluon.contrib.rnn)</a>
</li>
- </ul></td>
- <td style="width: 33%; vertical-align: top;"><ul>
<li><a
href="api/gluon/nn/index.html#mxnet.gluon.nn.Conv2DTranspose">Conv2DTranspose
(class in mxnet.gluon.nn)</a>
</li>
<li><a href="api/gluon/nn/index.html#mxnet.gluon.nn.Conv3D">Conv3D
(class in mxnet.gluon.nn)</a>
@@ -2725,16 +2725,6 @@
<li><a
href="api/legacy/symbol/op/index.html#mxnet.symbol.op.Convolution">(in module
mxnet.symbol.op)</a>
</li>
</ul></li>
- <li><a
href="api/legacy/ndarray/ndarray.html#mxnet.ndarray.Convolution_v1">Convolution_v1()
(in module mxnet.ndarray)</a>
-
- <ul>
- <li><a
href="api/legacy/ndarray/op/index.html#mxnet.ndarray.op.Convolution_v1">(in
module mxnet.ndarray.op)</a>
-</li>
- <li><a
href="api/legacy/symbol/symbol.html#mxnet.symbol.Convolution_v1">(in module
mxnet.symbol)</a>
-</li>
- <li><a
href="api/legacy/symbol/op/index.html#mxnet.symbol.op.Convolution_v1">(in
module mxnet.symbol.op)</a>
-</li>
- </ul></li>
<li><a href="api/np/generated/mxnet.np.copy.html#mxnet.np.copy">copy()
(in module mxnet.np)</a>
<ul>
@@ -6495,20 +6485,10 @@
<li><a
href="api/legacy/symbol/op/index.html#mxnet.symbol.op.Pooling">(in module
mxnet.symbol.op)</a>
</li>
</ul></li>
- <li><a
href="api/legacy/ndarray/ndarray.html#mxnet.ndarray.Pooling_v1">Pooling_v1()
(in module mxnet.ndarray)</a>
-
- <ul>
- <li><a
href="api/legacy/ndarray/op/index.html#mxnet.ndarray.op.Pooling_v1">(in module
mxnet.ndarray.op)</a>
-</li>
- <li><a
href="api/legacy/symbol/symbol.html#mxnet.symbol.Pooling_v1">(in module
mxnet.symbol)</a>
-</li>
- <li><a
href="api/legacy/symbol/op/index.html#mxnet.symbol.op.Pooling_v1">(in module
mxnet.symbol.op)</a>
+ <li><a
href="api/np/generated/mxnet.np.positive.html#mxnet.np.positive">positive (in
module mxnet.np)</a>
</li>
- </ul></li>
</ul></td>
<td style="width: 33%; vertical-align: top;"><ul>
- <li><a
href="api/np/generated/mxnet.np.positive.html#mxnet.np.positive">positive (in
module mxnet.np)</a>
-</li>
<li><a
href="api/legacy/image/index.html#mxnet.image.ImageIter.postprocess_data">postprocess_data()
(mxnet.image.ImageIter method)</a>
</li>
<li><a
href="api/legacy/ndarray/linalg/index.html#mxnet.ndarray.linalg.potrf">potrf()
(in module mxnet.ndarray.linalg)</a>
diff --git a/api/python/docs/objects.inv b/api/python/docs/objects.inv
index 77ba1e9..a4dbd04 100644
Binary files a/api/python/docs/objects.inv and b/api/python/docs/objects.inv
differ
diff --git a/api/python/docs/searchindex.js b/api/python/docs/searchindex.js
index 65125ae..63c596b 100644
--- a/api/python/docs/searchindex.js
+++ b/api/python/docs/searchindex.js
@@ -1 +1 @@
-Search.setIndex({docnames:["api/autograd/index","api/context/index","api/contrib/autograd/index","api/contrib/index","api/contrib/io/index","api/contrib/ndarray/index","api/contrib/onnx/index","api/contrib/quantization/index","api/contrib/symbol/index","api/contrib/tensorboard/index","api/contrib/tensorrt/index","api/contrib/text/index","api/engine/index","api/executor/index","api/gluon/block","api/gluon/constant","api/gluon/contrib/index","api/gluon/data/index","api/gluon/data/vision/da
[...]
\ No newline at end of file
+Search.setIndex({docnames:["api/autograd/index","api/context/index","api/contrib/autograd/index","api/contrib/index","api/contrib/io/index","api/contrib/ndarray/index","api/contrib/onnx/index","api/contrib/quantization/index","api/contrib/symbol/index","api/contrib/tensorboard/index","api/contrib/tensorrt/index","api/contrib/text/index","api/engine/index","api/executor/index","api/gluon/block","api/gluon/constant","api/gluon/contrib/index","api/gluon/data/index","api/gluon/data/vision/da
[...]
\ No newline at end of file
diff --git a/date.txt b/date.txt
deleted file mode 100644
index b5eed31..0000000
--- a/date.txt
+++ /dev/null
@@ -1 +0,0 @@
-Sun Aug 23 18:46:09 UTC 2020
diff --git a/feed.xml b/feed.xml
index ebe3493..f34c905 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8"?><feed
xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/"
version="4.0.0">Jekyll</generator><link
href="https://mxnet.apache.org/feed.xml" rel="self" type="application/atom+xml"
/><link href="https://mxnet.apache.org/" rel="alternate" type="text/html"
/><updated>2020-08-23T18:35:07+00:00</updated><id>https://mxnet.apache.org/feed.xml</id><title
type="html">Apache MXNet</title><subtitle>A flexible and efficient library for
deep [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8"?><feed
xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/"
version="4.0.0">Jekyll</generator><link
href="https://mxnet.apache.org/feed.xml" rel="self" type="application/atom+xml"
/><link href="https://mxnet.apache.org/" rel="alternate" type="text/html"
/><updated>2020-08-24T00:34:57+00:00</updated><id>https://mxnet.apache.org/feed.xml</id><title
type="html">Apache MXNet</title><subtitle>A flexible and efficient library for
deep [...]
\ No newline at end of file