szha closed pull request #9834: minor spelling tweaks for docs
URL: https://github.com/apache/incubator-mxnet/pull/9834
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/docs/tutorials/basic/data.md b/docs/tutorials/basic/data.md
index 1a88242592..54ee334f97 100644
--- a/docs/tutorials/basic/data.md
+++ b/docs/tutorials/basic/data.md
@@ -416,7 +416,7 @@ data_iter = mx.io.ImageRecordIter(
     data_shape=(3, 227, 227), # output data shape. An 227x227 region will be cropped from the original image.
     batch_size=4, # number of samples per batch
     resize=256 # resize the shorter edge to 256 before cropping
-    # ... you can add more augumentation options as defined in ImageRecordIter.
+    # ... you can add more augmentation options as defined in ImageRecordIter.
     )
 data_iter.reset()
 batch = data_iter.next()
diff --git a/docs/tutorials/basic/image_io.md b/docs/tutorials/basic/image_io.md
index 8d60ee8fc0..092affbc74 100644
--- a/docs/tutorials/basic/image_io.md
+++ b/docs/tutorials/basic/image_io.md
@@ -85,7 +85,7 @@ data_iter = mx.io.ImageRecordIter(
     data_shape=(3, 227, 227), # output data shape. An 227x227 region will be cropped from the original image.
     batch_size=4, # number of samples per batch
     resize=256 # resize the shorter edge to 256 before cropping
-    # ... you can add more augumentation options here. use help(mx.io.ImageRecordIter) to see all possible choices
+    # ... you can add more augmentation options here. use help(mx.io.ImageRecordIter) to see all possible choices
     )
 data_iter.reset()
 batch = data_iter.next()
diff --git a/docs/tutorials/basic/record_io.md b/docs/tutorials/basic/record_io.md
index e415d9448b..9ba6fa6e25 100644
--- a/docs/tutorials/basic/record_io.md
+++ b/docs/tutorials/basic/record_io.md
@@ -2,7 +2,7 @@
 
 This tutorial will walk through the python interface for reading and writing
 record io files. It can be useful when you need more more control over the
-details of data pipeline. For example, when you need to augument image and label
+details of data pipeline. For example, when you need to augment image and label
 together for detection and segmentation, or when you need a custom data iterator
 for triplet sampling and negative sampling.
 
@@ -16,7 +16,7 @@ import numpy as np
 import matplotlib.pyplot as plt
 ```
 
-The relevent code is under `mx.recordio`. There are two classes: `MXRecordIO`,
+The relevant code is under `mx.recordio`. There are two classes: `MXRecordIO`,
 which supports sequential read and write, and `MXIndexedRecordIO`, which
 supports random read and sequential write.
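
(For reference, and not part of the patch: a minimal illustrative sketch of the two `mx.recordio` classes named above. The file names and record payloads are made up for the example.)

```python
import mxnet as mx

# MXRecordIO: sequential write, then sequential read.
writer = mx.recordio.MXRecordIO('tmp.rec', 'w')
for i in range(3):
    writer.write(('record_%d' % i).encode('utf-8'))
writer.close()

reader = mx.recordio.MXRecordIO('tmp.rec', 'r')
while True:
    item = reader.read()
    if not item:
        break
    print(item)
reader.close()

# MXIndexedRecordIO: sequential write plus random read through an index file.
writer = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp_indexed.rec', 'w')
for i in range(3):
    writer.write_idx(i, ('record_%d' % i).encode('utf-8'))
writer.close()

reader = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp_indexed.rec', 'r')
print(reader.read_idx(2))  # jump straight to the record stored under key 2
reader.close()
```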
 
diff --git a/docs/tutorials/gluon/customop.md b/docs/tutorials/gluon/customop.md
index dbb1907bad..e10f3987ee 100644
--- a/docs/tutorials/gluon/customop.md
+++ b/docs/tutorials/gluon/customop.md
@@ -171,7 +171,7 @@ class DenseProp(mx.operator.CustomOpProp):
 
 ### Use CustomOp together with Block
 
-Parameterized CustomOp are ususally used together with Blocks, which holds the parameter.
+Parameterized CustomOp are usually used together with Blocks, which holds the parameter.
 
 
 ```python
diff --git a/docs/tutorials/gluon/mnist.md b/docs/tutorials/gluon/mnist.md
index 0bd616c369..fc2271999f 100644
--- a/docs/tutorials/gluon/mnist.md
+++ b/docs/tutorials/gluon/mnist.md
@@ -50,7 +50,7 @@ val_data = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size
 
 ## Approaches
 
-We will cover a couple of approaches for performing the hand written digit recognition task. The first approach makes use of a traditional deep neural network architecture called Multilayer Percepton (MLP). We'll discuss its drawbacks and use that as a motivation to introduce a second more advanced approach called Convolution Neural Network (CNN) that has proven to work very well for image classification tasks.
+We will cover a couple of approaches for performing the hand written digit recognition task. The first approach makes use of a traditional deep neural network architecture called Multilayer Perceptron (MLP). We'll discuss its drawbacks and use that as a motivation to introduce a second more advanced approach called Convolution Neural Network (CNN) that has proven to work very well for image classification tasks.
 
 Now, let's import required nn modules
 
@@ -142,7 +142,7 @@ for i in range(epoch):
                 z = net(x)
                 # Computes softmax cross entropy loss.
                 loss = gluon.loss.softmax_cross_entropy_loss(z, y)
-                # Backpropogate the error for one iteration.
+                # Backpropagate the error for one iteration.
                 ag.backward([loss])
                 outputs.append(z)
         # Updates internal evaluation
diff --git a/docs/tutorials/python/mnist.md b/docs/tutorials/python/mnist.md
index 067ded96ab..e408ead5ae 100644
--- a/docs/tutorials/python/mnist.md
+++ b/docs/tutorials/python/mnist.md
@@ -44,7 +44,7 @@ val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size
 ```
 
 ## Training
-We will cover a couple of approaches for performing the hand written digit recognition task. The first approach makes use of a traditional deep neural network architecture called Multilayer Percepton (MLP). We'll discuss its drawbacks and use that as a motivation to introduce a second more advanced approach called Convolution Neural Network (CNN) that has proven to work very well for image classification tasks.
+We will cover a couple of approaches for performing the hand written digit recognition task. The first approach makes use of a traditional deep neural network architecture called Multilayer Perceptron (MLP). We'll discuss its drawbacks and use that as a motivation to introduce a second more advanced approach called Convolution Neural Network (CNN) that has proven to work very well for image classification tasks.
 
 ### Multilayer Perceptron
 
diff --git a/docs/tutorials/sparse/csr.md b/docs/tutorials/sparse/csr.md
index bbe71ff40c..e66b10d998 100644
--- a/docs/tutorials/sparse/csr.md
+++ b/docs/tutorials/sparse/csr.md
@@ -13,7 +13,7 @@ For matrices of high sparsity (e.g. ~1% non-zeros = ~1% density), there are two
 - memory consumption is reduced significantly
 - certain operations are much faster (e.g. matrix-vector multiplication)
 
-You may be familiar with the CSR storage format in [SciPy](https://www.scipy.org/) and will note the similarities in MXNet's implementation. However there are some additional competitive features in `CSRNDArray` inherited from `NDArray`, such as non-blocking asynchronous evaluation and automatic parallelization that are not available in SciPy's flavor of CSR. You can find further explainations for evaluation and parallization strategy in MXNet in the [NDArray tutorial](https://mxnet.incubator.apache.org/tutorials/basic/ndarray.html#lazy-evaluation-and-automatic-parallelization).
+You may be familiar with the CSR storage format in [SciPy](https://www.scipy.org/) and will note the similarities in MXNet's implementation. However there are some additional competitive features in `CSRNDArray` inherited from `NDArray`, such as non-blocking asynchronous evaluation and automatic parallelization that are not available in SciPy's flavor of CSR. You can find further explanations for evaluation and parallelization strategy in MXNet in the [NDArray tutorial](https://mxnet.incubator.apache.org/tutorials/basic/ndarray.html#lazy-evaluation-and-automatic-parallelization).
 
 The introduction of `CSRNDArray` also brings a new attribute, `stype` as a holder for storage type info, to `NDArray`. You can query **ndarray.stype** now in addition to the oft-queried attributes such as **ndarray.shape**, **ndarray.dtype**, and **ndarray.context**. For a typical dense NDArray, the value of `stype` is **"default"**. For a `CSRNDArray`, the value of stype is **"csr"**.
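
(Again for reference, not part of the patch: a small sketch of the `stype` attribute described in the hunk above. The array values and shape are made up for the example.)

```python
import mxnet as mx

# Build a small dense NDArray and convert it to CSR storage.
dense = mx.nd.array([[0, 1, 0], [2, 0, 0], [0, 0, 3]])
csr = dense.tostype('csr')

# stype sits alongside the usual shape/dtype/context attributes.
print(dense.stype)   # "default"
print(csr.stype)     # "csr"
print(csr.shape, csr.dtype, csr.context)

# Converting back to the default (dense) storage recovers the same values.
print(csr.tostype('default').asnumpy())
```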
 


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
