This is an automated email from the ASF dual-hosted git repository.

jevans pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
     new 7748ae7edf docs: Fix a few typos (#21094)
7748ae7edf is described below

commit 7748ae7edf4d48908b70a98c0f3ea24e4853fd61
Author: Tim Gates <[email protected]>
AuthorDate: Wed Aug 17 04:43:51 2022 +1000

    docs: Fix a few typos (#21094)
    
    There are small typos in:
    - docs/python_docs/python/tutorials/getting-started/crash-course/7-use-gpus.md
    - docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md
    - example/README.md
    
    Fixes:
    - Should read `specifying` rather than `specifieing`.
    - Should read `multi threaded` rather than `iultithreaded`.
    - Should read `provisioning` rather than `provisionning`.
---
 .../python/tutorials/getting-started/crash-course/7-use-gpus.md         | 2 +-
 .../src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md        | 2 +-
 example/README.md                                                       | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/python_docs/python/tutorials/getting-started/crash-course/7-use-gpus.md b/docs/python_docs/python/tutorials/getting-started/crash-course/7-use-gpus.md
index 0922cd79d9..d112b8fb74 100644
--- a/docs/python_docs/python/tutorials/getting-started/crash-course/7-use-gpus.md
+++ b/docs/python_docs/python/tutorials/getting-started/crash-course/7-use-gpus.md
@@ -36,7 +36,7 @@ npx.num_gpus() #This command provides the number of GPUs MXNet can access
 
 ## Allocate data to a GPU
 
-MXNet's ndarray is very similar to NumPy's. One major difference is that MXNet's ndarray has a `device` attribute specifieing which device an array is on. By default, arrays are stored on `npx.cpu()`. To change it to the first GPU, you can use the following code, `npx.gpu()` or `npx.gpu(0)` to indicate the first GPU.
+MXNet's ndarray is very similar to NumPy's. One major difference is that MXNet's ndarray has a `device` attribute specifying which device an array is on. By default, arrays are stored on `npx.cpu()`. To change it to the first GPU, you can use the following code, `npx.gpu()` or `npx.gpu(0)` to indicate the first GPU.
 
 ```{.python .input}
 gpu = npx.gpu() if npx.num_gpus() > 0 else npx.cpu()
diff --git a/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md b/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md
index 89fbfaee09..48392533e4 100644
--- a/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md
+++ b/docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md
@@ -30,7 +30,7 @@ A long standing request from MXNet users has been to invoke parallel inference o
 With this use case in mind, the threadsafe version of CachedOp was added to provide a way for customers to do multi-threaded inference for MXNet users.
 This doc attempts to do the following:
 1. Discuss the current state of thread safety in MXNet
-2. Explain how one can use C API and thread safe version of cached op, along with CPP package to achieve iultithreaded inference. This will be useful for end users as well as frontend developers of different language bindings
+2. Explain how one can use C API and thread safe version of cached op, along with CPP package to achieve multi threaded inference. This will be useful for end users as well as frontend developers of different language bindings
 3. Discuss the limitations of the above approach
 4. Future Work
 
diff --git a/example/README.md b/example/README.md
index 4e9023aeca..36f110a619 100644
--- a/example/README.md
+++ b/example/README.md
@@ -81,7 +81,7 @@ As part of making sure all our tutorials are running correctly with the latest v
 
 Add your own test here `tests/tutorials/test_tutorials.py`. (If you forget, don't worry your PR will not pass the sanity check).
 
-If your tutorial depends on specific packages, simply add them to this provisionning script: `ci/docker/install/ubuntu_tutorials.sh`
+If your tutorial depends on specific packages, simply add them to this provisioning script: `ci/docker/install/ubuntu_tutorials.sh`
 
 ## <a name="list-of-examples"></a>List of examples
 