thomelane commented on a change in pull request #15427: [TUTORIAL] Gluon 
performance tips and tricks
URL: https://github.com/apache/incubator-mxnet/pull/15427#discussion_r299722411
 
 

 ##########
 File path: docs/tutorials/gluon/performance.md
 ##########
 @@ -0,0 +1,483 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+# Gluon Performance Tips & Tricks
+
+Compared to traditional machine learning methods, deep learning has improved model accuracy across a wide range of tasks, but it has also increased the amount of computation required for model training and inference. Specialised hardware, such as GPUs and FPGAs, can speed up the execution of networks, but it can sometimes be hard to write code that uses the hardware to its full potential. In this tutorial, we'll look at a few simple tips and tricks that you can use to speed up training and ultimately save on training costs.
+
+We'll start by writing some code to train an image classification network for 
the CIFAR-10 dataset, and then benchmark the throughput of the network in terms 
of samples processed per second. After some performance analysis, we'll 
identify the bottlenecks (i.e. the components limiting throughput) and improve 
the training speed step-by-step. We'll bring together all the tips and tricks 
at the end and evaluate our performance gains.
+
+
+```python
+from __future__ import print_function
+import multiprocessing
+import time
+import mxnet as mx
+import numpy as np
+```
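+
+Throughout the tutorial we'll measure throughput in samples processed per second. As a rough sketch of the kind of timing we'll rely on (the `measure_throughput` helper below is a hypothetical illustration, not part of the tutorial's code), it might look like this:
+
+```python
+# Hypothetical helper: measure samples/second of a function that
+# consumes one batch at a time.
+def measure_throughput(process_batch, batches, batch_size):
+    start = time.time()
+    num_batches = 0
+    for batch in batches:
+        process_batch(batch)
+        num_batches += 1
+    mx.nd.waitall()  # MXNet ops are asynchronous; block until all complete
+    duration = time.time() - start
+    return (num_batches * batch_size) / duration
+```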
+
+An Amazon EC2 p3.2xlarge instance was used to benchmark the code in this tutorial. You are likely to get different results and find different bottlenecks on other hardware, but these tips and tricks should still help improve training speed for bottleneck components. A GPU is recommended for this example.
+
+
+```python
+# Use a GPU context if one is available, otherwise fall back to CPU.
+ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+print("Using {} context.".format(ctx))
+```
+
+    Using gpu(0) context.
+
+
+We'll use the `CIFAR10` dataset provided out-of-the-box with Gluon.
+
+
+```python
+dataset = mx.gluon.data.vision.CIFAR10(train=True)
+print('{} samples'.format(len(dataset)))
+```
+
+    50000 samples
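+
+Each sample is an (image, label) pair. As a quick sanity check (a minimal sketch, assuming the dataset's default layout of 32x32x3 uint8 images), we can inspect the first sample:
+
+```python
+# Inspect the first sample: an HWC uint8 image and its class label.
+data, label = dataset[0]
+print(data.shape, data.dtype, label)
+```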
+
+
+So that we can learn how to identify training bottlenecks, let's intentionally introduce a short `sleep` into the data loading pipeline. We transform each 32x32 CIFAR-10 image to 224x224 so we can use it with the ResNet-50 network designed for ImageNet. [CIFAR-10 specific ResNet networks](https://gluon-cv.mxnet.io/api/model_zoo.html#gluoncv.model_zoo.get_cifar_resnet) exist, but we use the more standard ImageNet variants in this example.
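+
+As a rough sketch of what this pipeline could look like (the `transform_fn` name and the 1 ms sleep are illustrative assumptions, not the tutorial's exact code):
+
+```python
+# Resize each 32x32 image to 224x224 and add an artificial delay to
+# simulate a slow data loading pipeline.
+def transform_fn(data):
+    time.sleep(0.001)  # intentional bottleneck for demonstration
+    return mx.image.imresize(data, 224, 224)  # 32x32 -> 224x224 for ResNet-50
+
+dataset = dataset.transform_first(transform_fn)
+```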
 
 Review comment:
   Good catch. Changed, reran the notebook, and updated the stats.
