aaronmarkham commented on a change in pull request #15353: [MXNET-1358]Fit api tutorial URL: https://github.com/apache/incubator-mxnet/pull/15353#discussion_r297448022
########## File path: docs/tutorials/gluon/fit_api_tutorial.md ########## @@ -0,0 +1,265 @@ +<!--- Licensed to the Apache Software Foundation (ASF) under one --> +<!--- or more contributor license agreements. See the NOTICE file --> +<!--- distributed with this work for additional information --> +<!--- regarding copyright ownership. The ASF licenses this file --> +<!--- to you under the Apache License, Version 2.0 (the --> +<!--- "License"); you may not use this file except in compliance --> +<!--- with the License. You may obtain a copy of the License at --> + +<!--- http://www.apache.org/licenses/LICENSE-2.0 --> + +<!--- Unless required by applicable law or agreed to in writing, --> +<!--- software distributed under the License is distributed on an --> +<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY --> +<!--- KIND, either express or implied. See the License for the --> +<!--- specific language governing permissions and limitations --> +<!--- under the License. --> + + +# MXNet Gluon Fit API + +In this tutorial, we will see how to use the [Gluon Fit API](https://cwiki.apache.org/confluence/display/MXNET/Gluon+Fit+API+-+Tech+Design), which is the easiest way to train deep learning models using the [Gluon API](http://mxnet.incubator.apache.org/versions/master/gluon/index.html) in Apache MXNet. + +With the Fit API, you can train a deep learning model with a minimal amount of code. Just specify the network, the loss function, and the data you want to train on. You don't need to worry about the boilerplate code to loop through the dataset in batches (often called the 'training loop'). Advanced users can still write their own training loops for bespoke use cases, but most use cases will be covered by the Fit API. + +To demonstrate the Fit API, this tutorial will train an image classification model using the [ResNet-18](https://arxiv.org/abs/1512.03385) architecture for the neural network.
The model will be trained using the [Fashion-MNIST dataset](https://research.zalando.com/welcome/mission/research-projects/fashion-mnist/). + +## Prerequisites + +To complete this tutorial, you will need: + +- [MXNet](https://mxnet.incubator.apache.org/install/#overview) (version 1.5.0 or later) +- [Jupyter Notebook](https://jupyter.org/index.html) (for interactively running the provided .ipynb file) + + + + +```python +import mxnet as mx +from mxnet import gluon +from mxnet.gluon.model_zoo import vision +from mxnet.gluon.contrib.estimator import estimator +from mxnet.gluon.contrib.estimator.event_handler import TrainBegin, TrainEnd, EpochEnd, CheckpointHandler + +gpu_count = mx.context.num_gpus() +ctx = [mx.gpu(i) for i in range(gpu_count)] if gpu_count > 0 else mx.cpu() +mx.random.seed(7) # Set a fixed seed for reproducibility +``` + +## Dataset + +The [Fashion-MNIST](https://research.zalando.com/welcome/mission/research-projects/fashion-mnist/) dataset consists of fashion items divided into ten categories: t-shirt/top, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot. + +- It has 60,000 grayscale images of size 28 * 28 for training. +- It has 10,000 grayscale images of size 28 * 28 for testing/validation. + +We will use the `gluon.data.vision` package to directly import the Fashion-MNIST dataset and perform pre-processing on it. + + +```python +# Get the training data +fashion_mnist_train = gluon.data.vision.FashionMNIST(train=True) + +# Get the validation data +fashion_mnist_val = gluon.data.vision.FashionMNIST(train=False) +``` + + +```python +transforms = [gluon.data.vision.transforms.Resize(224), # We pick 224 because the network we use expects an input size of 224. + gluon.data.vision.transforms.ToTensor()] + +# Now we will stack all these together.
+transforms = gluon.data.vision.transforms.Compose(transforms) +``` + + +```python +# Apply the transformations +fashion_mnist_train = fashion_mnist_train.transform_first(transforms) +fashion_mnist_val = fashion_mnist_val.transform_first(transforms) +``` + + +```python +batch_size = 256 # Batch size of the images +num_workers = 4 # Number of parallel workers for loading data with the DataLoader. + +train_data_loader = gluon.data.DataLoader(fashion_mnist_train, batch_size=batch_size, + shuffle=True, num_workers=num_workers) +val_data_loader = gluon.data.DataLoader(fashion_mnist_val, batch_size=batch_size, + shuffle=False, num_workers=num_workers) +``` + +## Model and Optimizers + +Let's load the ResNet-18 model architecture from the [Gluon Model Zoo](http://mxnet.apache.org/api/python/gluon/model_zoo.html) and initialize its parameters. The Gluon Model Zoo contains a repository of pre-trained models as well as the model architecture definitions. We are using the model architecture from the model zoo in order to train it from scratch. + + +```python +resnet_18_v1 = vision.resnet18_v1(pretrained=False, classes=10) +resnet_18_v1.initialize(init=mx.init.Xavier(), ctx=ctx) +``` + +We will be using ```SoftmaxCrossEntropyLoss``` as the loss function since this is a multi-class classification problem. We will be using ```sgd``` (Stochastic Gradient Descent) as the optimizer. You can experiment with a different optimizer as well. Review comment: ```suggestion We will be using `SoftmaxCrossEntropyLoss` as the loss function since this is a multi-class classification problem. We will be using `sgd` (Stochastic Gradient Descent) as the optimizer. You can experiment with a different optimizer as well. ``` ---------------------------------------------------------------- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: [email protected] With regards, Apache Git Services
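The tutorial's intro notes that the Fit API removes the need to hand-write the epoch/batch training loop. As a framework-agnostic illustration of what an Estimator-style `fit()` call encapsulates, here is a minimal pure-Python sketch; the names `SimpleEstimator`, `forward`, `loss_fn`, and `step` are illustrative only and are not part of the MXNet API.

```python
# Sketch of the training-loop boilerplate that an Estimator-style fit()
# hides from the user. All names here are illustrative, not MXNet API.

class SimpleEstimator:
    def __init__(self, forward, loss_fn, step):
        self.forward = forward    # model forward pass
        self.loss_fn = loss_fn    # loss function
        self.step = step          # optimizer update rule

    def fit(self, batches, epochs):
        """Run the classic epoch/batch double loop; return per-epoch mean loss."""
        history = []
        for _ in range(epochs):
            epoch_loss = 0.0
            for x, y in batches:
                pred = self.forward(x)
                epoch_loss += self.loss_fn(pred, y)
                self.step()  # update parameters (a no-op in this toy example)
            history.append(epoch_loss / len(batches))
        return history

# Toy usage: the "model" predicts a constant, the loss is absolute error.
est = SimpleEstimator(forward=lambda x: 0.5,
                      loss_fn=lambda p, y: abs(p - y),
                      step=lambda: None)
history = est.fit(batches=[(0, 0.5), (1, 1.5)], epochs=2)
print(history)  # -> [0.5, 0.5]
```

In the actual tutorial, `gluon.contrib.estimator.Estimator` plays roughly this role, wiring the network, loss, and trainer together behind a single `fit()` call so that none of the looping code above has to be written by hand.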
