bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140676020
##########
File path: docs/tutorials/sparse/rowsparse.md
##########
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for Sparse Gradient Updates
+
+## Motivation
+
+Many real-world datasets involve high-dimensional sparse feature vectors. When learning
+the weights of a model on such sparse data, the gradients of the weights may also be sparse.
+For example, suppose we learn a linear model ``Y = XW + b``, where ``X`` holds sparse feature vectors:
+
+
+```python
+import mxnet as mx
+shape = (3, 10)
+# `X` only contains 4 non-zeros
+data = [6, 7, 8, 9]
+indptr = [0, 2, 3, 4]
+indices = [0, 4, 0, 0]
+X = mx.nd.sparse.csr_matrix(data, indptr, indices, shape)
+# the content of `X`
+X.asnumpy()
+```
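To make the CSR layout above concrete, here is a small NumPy sketch (plain Python, not MXNet code) that expands the same `data`/`indptr`/`indices` triple back into a dense array:

```python
import numpy as np

data = [6, 7, 8, 9]
indptr = [0, 2, 3, 4]
indices = [0, 4, 0, 0]
shape = (3, 10)

# indptr[i]:indptr[i+1] delimits the non-zeros belonging to row i;
# indices[j] gives the column of the j-th stored value
dense = np.zeros(shape)
for i in range(shape[0]):
    for j in range(indptr[i], indptr[i + 1]):
        dense[i, indices[j]] = data[j]
print(dense)
```

Row 0 ends up with 6 and 7 in columns 0 and 4, while rows 1 and 2 hold 8 and 9 in column 0, matching `X.asnumpy()` above.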
+
+Some columns in `X` do not contain any non-zero values, so the gradient of the weight `W` will have many row slices of all zeros, corresponding to the zero columns of `X`.
+
+
+```python
+W = mx.nd.random_uniform(shape=(10, 2))
+b = mx.nd.zeros((3, 1))
+# attach a gradient placeholder for W
+W.attach_grad(stype='row_sparse')
+with mx.autograd.record():
+ Y = mx.nd.dot(X, W) + b
+
+Y.backward()
+# the content of the gradients of `W`
+{'W.grad': W.grad, 'W.grad.asnumpy()': W.grad.asnumpy()}
+```
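To see why entire rows of the gradient vanish, note that for `Y = XW + b` the gradient with respect to `W` is `X` transposed times the upstream gradient of `Y`, so every all-zero column of `X` yields an all-zero row slice in the gradient. A minimal NumPy sketch of this, independent of the MXNet code above (the upstream gradient of ones mirrors what `Y.backward()` uses by default):

```python
import numpy as np

# X has non-zero entries only in columns 0 and 4, as in the CSR example
X = np.zeros((3, 10))
X[0, 0], X[0, 4], X[1, 0], X[2, 0] = 6, 7, 8, 9

dY = np.ones((3, 2))    # upstream gradient of Y, shape (3, 2)
grad_W = X.T @ dY       # gradient of W, shape (10, 2)

# only rows 0 and 4 of grad_W are non-zero: exactly the non-zero columns of X
nonzero_rows = np.nonzero(grad_W.any(axis=1))[0]
print(nonzero_rows)
```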
+
+Storing and manipulating such sparse matrices, which have many row slices of all zeros, in the default dense structure wastes memory and computation on the zeros. More importantly, many gradient-based optimization methods, such as SGD, [AdaGrad](https://stanford.edu/~jduchi/projects/DuchiHaSi10_colt.pdf) and [Adam](https://arxiv.org/pdf/1412.6980.pdf),
+can take advantage of sparse gradients to be both efficient and effective.
+**In MXNet, the ``RowSparseNDArray`` stores a matrix in the ``row sparse`` format, which is designed for arrays in which most row slices are all zeros.**
+In this tutorial, we will describe what the row sparse format is and how to use a RowSparseNDArray for sparse gradient updates in MXNet.
+
+## Prerequisites
+
+To complete this tutorial, we need:
+
+- MXNet. See the instructions for your operating system in [Setup and
Installation](http://mxnet.io/get_started/install.html)
+- [Jupyter](http://jupyter.org/)
+ ```
+ pip install jupyter
+ ```
+- Basic knowledge of NDArray in MXNet. See the detailed tutorial for NDArray
in [NDArray - Imperative tensor operations on
CPU/GPU](https://mxnet.incubator.apache.org/tutorials/basic/ndarray.html)
+- Understanding of [automatic differentiation with
autograd](http://gluon.mxnet.io/chapter01_crashcourse/autograd.html)
+- GPUs - A section of this tutorial uses GPUs. If you don't have GPUs on your
+machine, simply set the variable `gpu_device` (set in the GPUs section of this
+tutorial) to `mx.cpu()`
+
+## Row Sparse Format
+
+A RowSparseNDArray represents a multidimensional NDArray using two separate arrays:
+`data` and `indices`.
+
+- data: an NDArray of any dtype with shape `[D0, D1, ..., Dn]`.
+- indices: a 1D int64 NDArray with shape `[D0]`, with values sorted in ascending order.
+
+The ``indices`` array stores the indices of the row slices that contain non-zero elements,
+while their values are stored in the ``data`` array. The dense NDArray `dense` represented by a RowSparseNDArray `rsp` satisfies
+
+``dense[rsp.indices[i], :, :, :, ...] = rsp.data[i, :, :, :, ...]``
+
+A RowSparseNDArray is typically used to represent non-zero row slices of a
large NDArray of shape [LARGE0, D1, .. , Dn] where LARGE0 >> D0 and most row
slices are zeros.
+
+Given this two-dimensional matrix:
+
+
+```python
+[[ 1, 2, 3],
+ [ 0, 0, 0],
+ [ 4, 0, 5],
+ [ 0, 0, 0],
+ [ 0, 0, 0]]
+```
+
+The row sparse representation would be:
+- `data` array holds all the non-zero row slices of the array.
+- `indices` array stores the row index for each row slice with non-zero
elements.
+
+
+
+```python
+data = [[1, 2, 3], [4, 0, 5]]
+indices = [0, 2]
+```
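The relation `dense[rsp.indices[i]] = rsp.data[i]` can be checked for this example with plain NumPy (a sketch of the format's semantics, not the MXNet API):

```python
import numpy as np

dense = np.array([[1, 2, 3],
                  [0, 0, 0],
                  [4, 0, 5],
                  [0, 0, 0],
                  [0, 0, 0]])

data = np.array([[1, 2, 3], [4, 0, 5]])
indices = np.array([0, 2])

# scatter the stored row slices back into an all-zero array
reconstructed = np.zeros_like(dense)
reconstructed[indices] = data
print(np.array_equal(reconstructed, dense))  # prints True
```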
+
+`RowSparseNDArray` supports multidimensional arrays. Given this 3D tensor:
+
+
+```python
+[[[1, 0],
+ [0, 2],
+ [3, 4]],
+
+ [[5, 0],
+ [6, 0],
+ [0, 0]],
+
+ [[0, 0],
+ [0, 0],
+ [0, 0]]]
+```
+
+The row sparse representation would be (with `data` and `indices` defined the
same as above):
+
+
+```python
+data = [[[1, 0], [0, 2], [3, 4]], [[5, 0], [6, 0], [0, 0]]]
+indices = [0, 1]
+```
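The same reconstruction rule applies along axis 0 no matter how many trailing dimensions `data` has; a NumPy sketch for this 3D case:

```python
import numpy as np

data = np.array([[[1, 0], [0, 2], [3, 4]],
                 [[5, 0], [6, 0], [0, 0]]])
indices = np.array([0, 1])

# the full shape is (3, 3, 2); the row slice at index 2 is all zeros
dense = np.zeros((3, 3, 2), dtype=data.dtype)
dense[indices] = data
print(dense[2])  # the omitted slice stays all zeros
```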
+
+``RowSparseNDArray`` is a subclass of ``NDArray``. If you query the **stype** of a RowSparseNDArray,
+the value will be **"row_sparse"**.
Review comment:
Since this is called "row_sparse", wouldn't it be nice if we had called
CSR's stype as "compressed_sparse" for ease in understanding, i.e.
"compressed_sparse" for CSR and "row_sparse" for RSP? :-)
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services