This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

The following commit(s) were added to refs/heads/master by this push:
     new af0c3b4  minor spelling tweaks for docs (#9834)
af0c3b4 is described below

commit af0c3b4e9bcb41734375470f965bba3c5731b1d0
Author: brett koonce <koo...@hello.com>
AuthorDate: Mon Feb 19 18:41:16 2018 -0800

    minor spelling tweaks for docs (#9834)
---
 docs/tutorials/basic/data.md      | 2 +-
 docs/tutorials/basic/image_io.md  | 2 +-
 docs/tutorials/basic/record_io.md | 4 ++--
 docs/tutorials/gluon/customop.md  | 2 +-
 docs/tutorials/gluon/mnist.md     | 4 ++--
 docs/tutorials/python/mnist.md    | 2 +-
 docs/tutorials/sparse/csr.md      | 2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/tutorials/basic/data.md b/docs/tutorials/basic/data.md
index 1a88242..54ee334 100644
--- a/docs/tutorials/basic/data.md
+++ b/docs/tutorials/basic/data.md
@@ -416,7 +416,7 @@ data_iter = mx.io.ImageRecordIter(
     data_shape=(3, 227, 227), # output data shape. An 227x227 region will be cropped from the original image.
     batch_size=4, # number of samples per batch
     resize=256 # resize the shorter edge to 256 before cropping
-    # ... you can add more augumentation options as defined in ImageRecordIter.
+    # ... you can add more augmentation options as defined in ImageRecordIter.
     )
 data_iter.reset()
 batch = data_iter.next()
diff --git a/docs/tutorials/basic/image_io.md b/docs/tutorials/basic/image_io.md
index 8d60ee8..092affb 100644
--- a/docs/tutorials/basic/image_io.md
+++ b/docs/tutorials/basic/image_io.md
@@ -85,7 +85,7 @@ data_iter = mx.io.ImageRecordIter(
     data_shape=(3, 227, 227), # output data shape. An 227x227 region will be cropped from the original image.
     batch_size=4, # number of samples per batch
     resize=256 # resize the shorter edge to 256 before cropping
-    # ... you can add more augumentation options here. use help(mx.io.ImageRecordIter) to see all possible choices
+    # ... you can add more augmentation options here. use help(mx.io.ImageRecordIter) to see all possible choices
     )
 data_iter.reset()
 batch = data_iter.next()
diff --git a/docs/tutorials/basic/record_io.md b/docs/tutorials/basic/record_io.md
index e415d94..9ba6fa6 100644
--- a/docs/tutorials/basic/record_io.md
+++ b/docs/tutorials/basic/record_io.md
@@ -2,7 +2,7 @@
 
 This tutorial will walk through the python interface for reading and writing
 record io files. It can be useful when you need more more control over the
-details of data pipeline. For example, when you need to augument image and label
+details of data pipeline. For example, when you need to augment image and label
 together for detection and segmentation, or when you need a custom data iterator
 for triplet sampling and negative sampling.
@@ -16,7 +16,7 @@ import numpy as np
 import matplotlib.pyplot as plt
 ```
 
-The relevent code is under `mx.recordio`. There are two classes: `MXRecordIO`,
+The relevant code is under `mx.recordio`. There are two classes: `MXRecordIO`,
 which supports sequential read and write, and `MXIndexedRecordIO`, which
 supports random read and sequential write.
diff --git a/docs/tutorials/gluon/customop.md b/docs/tutorials/gluon/customop.md
index dbb1907..e10f398 100644
--- a/docs/tutorials/gluon/customop.md
+++ b/docs/tutorials/gluon/customop.md
@@ -171,7 +171,7 @@ class DenseProp(mx.operator.CustomOpProp):
 
 ### Use CustomOp together with Block
 
-Parameterized CustomOp are ususally used together with Blocks, which holds the parameter.
+Parameterized CustomOp are usually used together with Blocks, which holds the parameter.
 
 
 ```python
diff --git a/docs/tutorials/gluon/mnist.md b/docs/tutorials/gluon/mnist.md
index 0bd616c..fc22719 100644
--- a/docs/tutorials/gluon/mnist.md
+++ b/docs/tutorials/gluon/mnist.md
@@ -50,7 +50,7 @@ val_data = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size
 
 ## Approaches
 
-We will cover a couple of approaches for performing the hand written digit recognition task. The first approach makes use of a traditional deep neural network architecture called Multilayer Percepton (MLP). We'll discuss its drawbacks and use that as a motivation to introduce a second more advanced approach called Convolution Neural Network (CNN) that has proven to work very well for image classification tasks.
+We will cover a couple of approaches for performing the hand written digit recognition task. The first approach makes use of a traditional deep neural network architecture called Multilayer Perceptron (MLP). We'll discuss its drawbacks and use that as a motivation to introduce a second more advanced approach called Convolution Neural Network (CNN) that has proven to work very well for image classification tasks.
 
 Now, let's import required nn modules
@@ -142,7 +142,7 @@ for i in range(epoch):
         z = net(x)
         # Computes softmax cross entropy loss.
         loss = gluon.loss.softmax_cross_entropy_loss(z, y)
-        # Backpropogate the error for one iteration.
+        # Backpropagate the error for one iteration.
         ag.backward([loss])
         outputs.append(z)
     # Updates internal evaluation
diff --git a/docs/tutorials/python/mnist.md b/docs/tutorials/python/mnist.md
index 067ded9..e408ead 100644
--- a/docs/tutorials/python/mnist.md
+++ b/docs/tutorials/python/mnist.md
@@ -44,7 +44,7 @@ val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size
 ```
 
 ## Training
 
-We will cover a couple of approaches for performing the hand written digit recognition task. The first approach makes use of a traditional deep neural network architecture called Multilayer Percepton (MLP). We'll discuss its drawbacks and use that as a motivation to introduce a second more advanced approach called Convolution Neural Network (CNN) that has proven to work very well for image classification tasks.
+We will cover a couple of approaches for performing the hand written digit recognition task. The first approach makes use of a traditional deep neural network architecture called Multilayer Perceptron (MLP). We'll discuss its drawbacks and use that as a motivation to introduce a second more advanced approach called Convolution Neural Network (CNN) that has proven to work very well for image classification tasks.
 
 ### Multilayer Perceptron
diff --git a/docs/tutorials/sparse/csr.md b/docs/tutorials/sparse/csr.md
index bbe71ff..e66b10d 100644
--- a/docs/tutorials/sparse/csr.md
+++ b/docs/tutorials/sparse/csr.md
@@ -13,7 +13,7 @@ For matrices of high sparsity (e.g. ~1% non-zeros = ~1% density), there are two
 - memory consumption is reduced significantly
 - certain operations are much faster (e.g. matrix-vector multiplication)
 
-You may be familiar with the CSR storage format in [SciPy](https://www.scipy.org/) and will note the similarities in MXNet's implementation. However there are some additional competitive features in `CSRNDArray` inherited from `NDArray`, such as non-blocking asynchronous evaluation and automatic parallelization that are not available in SciPy's flavor of CSR. You can find further explainations for evaluation and parallization strategy in MXNet in the [NDArray tutorial](https://mxnet.incu [...]
+You may be familiar with the CSR storage format in [SciPy](https://www.scipy.org/) and will note the similarities in MXNet's implementation. However there are some additional competitive features in `CSRNDArray` inherited from `NDArray`, such as non-blocking asynchronous evaluation and automatic parallelization that are not available in SciPy's flavor of CSR. You can find further explanations for evaluation and parallelization strategy in MXNet in the [NDArray tutorial](https://mxnet.inc [...]
 
 The introduction of `CSRNDArray` also brings a new attribute, `stype` as a holder for storage type info, to `NDArray`. You can query **ndarray.stype** now in addition to the oft-queried attributes such as **ndarray.shape**, **ndarray.dtype**, and **ndarray.context**. For a typical dense NDArray, the value of `stype` is **"default"**. For a `CSRNDArray`, the value of stype is **"csr"**.

-- 
To stop receiving notification emails like this one, please contact zhash...@apache.org.
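P.S. For readers tracing the csr.md hunk above: the CSR storage layout that tutorial compares `CSRNDArray` against is SciPy's. A minimal sketch of that format using `scipy.sparse.csr_matrix` (the example matrix below is made up for illustration; this is SciPy's API, not MXNet's):

```python
import numpy as np
from scipy import sparse

# A mostly-zero matrix stored densely.
dense = np.array([[0., 1., 0.],
                  [2., 0., 3.],
                  [0., 0., 0.]])

# CSR keeps only the non-zero values plus two index arrays.
csr = sparse.csr_matrix(dense)
print(csr.data)     # non-zero values, row by row: [1. 2. 3.]
print(csr.indices)  # column index of each stored value: [1 0 2]
print(csr.indptr)   # offsets where each row's values start: [0 1 3 3]

# Matrix-vector multiplication touches only the stored entries,
# which is why it is fast at ~1% density.
print(csr @ np.ones(3))  # row sums: [1. 5. 0.]
```

The `indptr` array is what makes row slicing cheap: row `i` occupies the slice `data[indptr[i]:indptr[i+1]]`. MXNet's `CSRNDArray` uses the same three-array layout, with the extra `stype`/async-evaluation features the tutorial describes.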