Author: wangwei
Date: Thu Mar 23 08:25:35 2017
New Revision: 1788191
URL: http://svn.apache.org/viewvc?rev=1788191&view=rev
Log:
update the docs by jenkins for commit 1e30a8b
Added:
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/caffe/README.md.txt
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/char-rnn/README.md.txt
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/cifar10/README.md.txt
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/alexnet/README.md.txt
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/googlenet/README.md.txt
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/resnet/README.md.txt
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/index.rst.txt
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/mnist/README.md.txt
Modified:
incubator/singa/site/trunk/en/_sources/docs/installation.md.txt
incubator/singa/site/trunk/en/docs/installation.html
incubator/singa/site/trunk/en/docs/model_zoo/examples/caffe/README.html
incubator/singa/site/trunk/en/docs/model_zoo/examples/char-rnn/README.html
incubator/singa/site/trunk/en/docs/model_zoo/examples/cifar10/README.html
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/alexnet/README.html
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/googlenet/README.html
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/resnet/README.html
incubator/singa/site/trunk/en/docs/model_zoo/examples/index.html
incubator/singa/site/trunk/en/docs/model_zoo/examples/mnist/README.html
incubator/singa/site/trunk/en/objects.inv
incubator/singa/site/trunk/en/searchindex.js
Modified: incubator/singa/site/trunk/en/_sources/docs/installation.md.txt
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/_sources/docs/installation.md.txt?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/_sources/docs/installation.md.txt (original)
+++ incubator/singa/site/trunk/en/_sources/docs/installation.md.txt Thu Mar 23
08:25:35 2017
@@ -9,11 +9,11 @@ Currently, SINGA has conda packages (Pyt
1. CPU only
- conda install -c nusdbsystem singa
+ conda install -c nusdbsystem singa
2. GPU via CUDA+cuDNN
- conda install -c nusdbsystem singa-cudax.y-cudnnz
+ conda install -c nusdbsystem singa-cudax.y-cudnnz
where `x.y,z` is one of <8.0, 5>, <7.5, 5> and <7.5, 4>.
Users need to install CUDA and cuDNN before installing SINGA.
@@ -46,28 +46,16 @@ The following Debian packages (on archit
<th>Link</th>
</tr>
<tr>
- <td>Ubuntu14.04</td>
+ <td>Ubuntu16.04</td>
<td>CPU</td>
<td>-</td>
- <td><a
href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/debian/latest/ubuntu14.04-cpp/python-singa.deb">latest</a>,
<a
href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/debian">history</a></td>
+ <td><a
href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/debian/latest/ubuntu16.04-cpp/python-singa-1.1.0.deb">latest</a>,
<a
href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/debian">history</a></td>
</tr>
<tr>
- <td>Ubuntu14.04</td>
- <td>GPU</td>
- <td>CUDA7.5+cuDNN4</td>
- <td>-</td>
- </tr>
- <tr>
- <td>Ubuntu14.04</td>
- <td>GPU</td>
- <td>CUDA7.5+cuDNN5</td>
- <td>-</td>
- </tr>
- <tr>
- <td>Ubuntu14.04</td>
+ <td>Ubuntu16.04</td>
<td>GPU</td>
<td>CUDA8.0+cuDNN5</td>
- <td>-</td>
+ <td><a
href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/debian/latest/ubuntu16.04-cpp/python-singa-cuda-1.1.0.deb">latest</a>,
<a
href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/debian">history</a></td>
</tr>
</table>
Added:
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/caffe/README.md.txt
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/caffe/README.md.txt?rev=1788191&view=auto
==============================================================================
---
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/caffe/README.md.txt
(added)
+++
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/caffe/README.md.txt
Thu Mar 23 08:25:35 2017
@@ -0,0 +1,32 @@
+# Use parameters pre-trained from Caffe in SINGA
+
+In this example, we use SINGA to load the VGG parameters trained by Caffe to do image classification.
+
+## Run this example
+You can run this example by simply executing `run.sh vgg16` or `run.sh vgg19`.
+The script does the following work.
+
+### Obtain the Caffe model
+* Download the Caffe model prototxt and parameter binary file.
+* Currently we only support the latest Caffe format; if your model is in a
+ previous version of Caffe, please update it to the current format (this is
+ supported by Caffe).
+* After updating, you will have two files, i.e., the prototxt and the parameter
+ binary file.
+
+### Prepare test images
+A few sample images are downloaded into the `test` folder.
+
+### Predict
+The `predict.py` script creates the VGG model and reads the parameters,
+
+ usage: predict.py [-h] model_txt model_bin imgclass
+
+where `imgclass` refers to the synsets of the ImageNet dataset for VGG models.
+You can start the prediction program by executing the following command:
+
+ python predict.py vgg16.prototxt vgg16.caffemodel synset_words.txt
+
+Then type in the image path, and the program will output the top-5 labels.
+
+More Caffe models will be tested soon.
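The top-5 output step described above can be sketched as follows. Note this is a hypothetical helper over a generic probability vector, not SINGA's actual predict.py API; the function name and synset list format are assumptions.

```python
import numpy as np

def topk_labels(probs, synsets, k=5):
    """Return the k (label, probability) pairs with the highest probability."""
    idx = np.argsort(probs)[::-1][:k]  # indices sorted by descending probability
    return [(synsets[i], float(probs[i])) for i in idx]

# Toy example with 6 classes standing in for the ImageNet synsets
probs = np.array([0.05, 0.40, 0.10, 0.25, 0.15, 0.05])
synsets = ["cat", "dog", "fish", "bird", "frog", "ant"]
print(topk_labels(probs, synsets, k=3))
```

A real run would use the network's softmax output and the labels from `synset_words.txt` in place of the toy arrays.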
Added:
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/char-rnn/README.md.txt
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/char-rnn/README.md.txt?rev=1788191&view=auto
==============================================================================
---
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/char-rnn/README.md.txt
(added)
+++
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/char-rnn/README.md.txt
Thu Mar 23 08:25:35 2017
@@ -0,0 +1,33 @@
+# Train Char-RNN over plain text
+
+Recurrent neural networks (RNN) are widely used for modelling sequential data,
+e.g., natural language sentences. This example describes how to implement an RNN
+application (or model) using SINGA's RNN layers.
+We will use the [char-rnn](https://github.com/karpathy/char-rnn) model as an
+example, which trains over sentences or
+source code, with each character as an input unit. In particular, we will train
+an RNN using GRU over Linux kernel source code. After training, we expect the
+model to generate meaningful code.
+
+
+## Instructions
+
+* Compile and install SINGA. Currently the RNN implementation depends on cuDNN version >= 5.05.
+
+* Prepare the dataset. Download the [kernel source code](http://cs.stanford.edu/people/karpathy/char-rnn/).
+Other plain text files can also be used.
+
+* Start the training,
+
+ python train.py linux_input.txt
+
+ Some hyper-parameters can be set through the command line,
+
+ python train.py -h
+
+* Sample characters from the model by providing the number of characters to sample and the seed string.
+
+ python sample.py 'model.bin' 100 --seed '#include <std'
+
+ Please replace 'model.bin' with the path to one of the checkpoint files.
+
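The character-as-input-unit encoding the model consumes can be sketched as a vocabulary builder plus one-hot encoder. This is an illustrative sketch, not train.py's actual code; the helper names and layout are assumptions.

```python
import numpy as np

def build_vocab(text):
    """Map each distinct character to a stable integer index."""
    return {c: i for i, c in enumerate(sorted(set(text)))}

def one_hot(text, vocab):
    """Encode a string as a (timesteps, vocab_size) one-hot matrix."""
    x = np.zeros((len(text), len(vocab)), dtype=np.float32)
    for t, c in enumerate(text):
        x[t, vocab[c]] = 1.0
    return x

# The seed string from the sampling example above
sample = "#include <std"
vocab = build_vocab(sample)
x = one_hot(sample, vocab)
print(x.shape)
```

In real training the vocabulary is built over the whole corpus (e.g., the Linux kernel source), so its size is the number of distinct characters in that file.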
Added:
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/cifar10/README.md.txt
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/cifar10/README.md.txt?rev=1788191&view=auto
==============================================================================
---
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/cifar10/README.md.txt
(added)
+++
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/cifar10/README.md.txt
Thu Mar 23 08:25:35 2017
@@ -0,0 +1,76 @@
+# Train CNN over Cifar-10
+
+
+A convolutional neural network (CNN) is a type of feed-forward artificial neural
+network widely used for image and video classification. In this example, we
+will train the following deep CNN models to do image classification over the CIFAR-10 dataset,
+
+1.
[AlexNet](https://code.google.com/p/cuda-convnet/source/browse/trunk/example-layers/layers-18pct.cfg)
+the best validation accuracy (without data augmentation) we achieved was about
82%.
+
+2. [VGGNet](http://torch.ch/blog/2015/07/30/cifar.html), the best validation
accuracy (without data augmentation) we achieved was about 89%.
+3. [ResNet](https://github.com/facebook/fb.resnet.torch), the best validation
accuracy (without data augmentation) we achieved was about 83%.
+4. [Alexnet from
Caffe](https://github.com/BVLC/caffe/tree/master/examples/cifar10), SINGA is
able to convert model from Caffe seamlessly.
+
+
+## Instructions
+
+
+### SINGA installation
+
+Users can compile and install SINGA from source or install the Python version.
+The code can run on both CPU and GPU. For GPU training, CUDA and cuDNN (V4 or V5)
+are required. Please refer to the installation page for detailed instructions.
+
+### Data preparation
+
+The binary Cifar-10 dataset can be downloaded by
+
+ python download_data.py bin
+
+The Python version can be downloaded by
+
+ python download_data.py py
+
+### Training
+
+There are four training programs
+
+1. train.py. The following command trains the VGG model using the Python
+version of the Cifar-10 dataset in the 'cifar-10-batches-py' folder.
+
+ python train.py vgg cifar-10-batches-py
+
+ To train other models, please replace 'vgg' with 'alexnet', 'resnet' or 'caffe',
+ where 'caffe' refers to the AlexNet model converted from Caffe. By default
+ the training runs on a CudaGPU device; to run it on CppCPU, add an additional
+ argument
+
+ python train.py vgg cifar-10-batches-py --use_cpu
+
+2. alexnet.cc. It trains the AlexNet model using the CPP APIs on a CudaGPU,
+
+ ./run.sh
+
+3. alexnet-parallel.cc. It trains the AlexNet model using the CPP APIs on two CudaGPU devices.
+The two devices run synchronously to compute the gradients of the model parameters, which are
+averaged on the host CPU device and then applied to update the parameters.
+
+ ./run-parallel.sh
+
+4. vgg-parallel.cc. It trains the VGG model using the CPP APIs on two CudaGPU devices, similar to alexnet-parallel.cc.
+
+### Prediction
+
+predict.py includes the prediction function
+
+ def predict(net, images, dev, topk=5)
+
+The net is created by loading the previously trained model; images consist of
+a numpy array of images (one row per image); dev is the training device, e.g.,
+a CudaGPU device or the host CppCPU device. It returns the topk labels for each instance.
+
+The main function of predict.py provides an example of using the pre-trained AlexNet model to do prediction for new images.
+The 'model.bin' file generated by the training program should be placed in the cifar10 folder before running
+
+ python predict.py
Added:
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/alexnet/README.md.txt
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/alexnet/README.md.txt?rev=1788191&view=auto
==============================================================================
---
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/alexnet/README.md.txt
(added)
+++
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/alexnet/README.md.txt
Thu Mar 23 08:25:35 2017
@@ -0,0 +1,58 @@
+# Train AlexNet over ImageNet
+
+A convolutional neural network (CNN) is a type of feed-forward neural
+network widely used for image and video classification. In this example, we will
+use a [deep CNN model](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks)
+to do image classification on the ImageNet dataset.
+
+## Instructions
+
+### Compile SINGA
+
+Please compile SINGA with CUDA, cuDNN and OpenCV. You can manually turn on the
+options in CMakeLists.txt or run `ccmake ..` in the build/ folder.
+
+We have tested cuDNN V4 and V5 (V5 requires CUDA 7.5).
+
+### Data download
+* Please refer to steps 1-3 of [Instructions to create ImageNet 2012 data](https://github.com/amd/OpenCL-caffe/wiki/Instructions-to-create-ImageNet-2012-data)
+ to download and decompress the data.
+* You can download the training and validation lists via
+ [get_ilsvrc_aux.sh](https://github.com/BVLC/caffe/blob/master/data/ilsvrc12/get_ilsvrc_aux.sh)
+ or from [ImageNet](http://www.image-net.org/download-images).
+
+### Data preprocessing
+* Assuming you have downloaded the data and the list,
+ transform the data into binary files by running:
+
+ sh create_data.sh
+
+ The script will generate a test file (`test.bin`), a mean file (`mean.bin`) and
+ several training files (`trainX.bin`) in the specified output folder.
+* You can also change the parameters in `create_data.sh`.
+ + `-trainlist <file>`: the file of the training list;
+ + `-trainfolder <folder>`: the folder of training images;
+ + `-testlist <file>`: the file of the test list;
+ + `-testfolder <folder>`: the folder of test images;
+ + `-outdata <folder>`: the folder to save output files, including the mean, training and test files.
+ The script will generate these files in the specified folder;
+ + `-filesize <int>`: number of training images stored in each binary file.
+
+### Training
+* After preparing the data, you can run the following command to train the AlexNet model.
+
+ sh run.sh
+
+* You may change the parameters in `run.sh`.
+ + `-epoch <int>`: number of epochs to train, default is 90;
+ + `-lr <float>`: base learning rate; the learning rate decreases every 20 epochs,
+ more specifically, `lr = lr * exp(0.1 * (epoch / 20))`;
+ + `-batchsize <int>`: batch size; it should be adjusted according to your memory;
+ + `-filesize <int>`: number of training images stored in each binary file, the
+ same as the `filesize` in data preprocessing;
+ + `-ntrain <int>`: number of training images;
+ + `-ntest <int>`: number of test images;
+ + `-data <folder>`: the folder which stores the binary files, i.e., the output
+ folder of the data preprocessing step;
+ + `-pfreq <int>`: the frequency (in batches) of printing the current model status (loss and accuracy);
+ + `-nthreads <int>`: the number of threads used to load data fed to the model.
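The `-filesize` parameter above partitions the training set into several `trainX.bin` files. A minimal sketch of that partitioning, assuming simple ceiling division with the last file holding the remainder (the exact layout create_data.sh uses is not specified here):

```python
import math

def num_binary_files(ntrain, filesize):
    """How many trainX.bin files a training set of ntrain images fills."""
    return math.ceil(ntrain / filesize)

def images_in_file(ntrain, filesize, index):
    """Images stored in file `index` (0-based); the last file may be partial."""
    start = index * filesize
    return max(0, min(filesize, ntrain - start))

# Toy example: 10 images, 4 per file -> files of sizes 4, 4, 2
print([images_in_file(10, 4, i) for i in range(num_binary_files(10, 4))])
```

This is why `-ntrain` and `-filesize` must agree between preprocessing and training: together they determine how many files the data loader expects.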
Added:
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/googlenet/README.md.txt
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/googlenet/README.md.txt?rev=1788191&view=auto
==============================================================================
---
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/googlenet/README.md.txt
(added)
+++
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/googlenet/README.md.txt
Thu Mar 23 08:25:35 2017
@@ -0,0 +1,66 @@
+---
+name: GoogleNet on ImageNet
+SINGA version: 1.0.1
+SINGA commit: 8c990f7da2de220e8a012c6a8ecc897dc7532744
+parameter_url:
https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
+parameter_sha1: 0a88e8948b1abca3badfd8d090d6be03f8d7655d
+license: unrestricted
https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
+---
+
+# Image Classification using GoogleNet
+
+
+In this example, we convert GoogleNet trained with Caffe to SINGA for image classification.
+
+## Instructions
+
+* Download the parameter checkpoint file into this folder
+
+ $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
+ $ tar xvf bvlc_googlenet.tar.gz
+
+* Run the program
+
+ # use cpu
+ $ python serve.py -C &
+ # use gpu
+ $ python serve.py &
+
+* Submit images for classification
+
+ $ curl -i -F [email protected] http://localhost:9999/api
+ $ curl -i -F [email protected] http://localhost:9999/api
+ $ curl -i -F [email protected] http://localhost:9999/api
+
+image1.jpg, image2.jpg and image3.jpg should be downloaded before executing the above commands.
+
+## Details
+
+We first extract the parameter values from [Caffe's checkpoint file](http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel) into a pickle file.
+After downloading the checkpoint file into the `caffe_root/python` folder, run the following script:
+
+    # to be executed within the caffe_root/python folder
+    # (Python 2, matching Caffe's Python bindings)
+    import caffe
+    import numpy as np
+    import cPickle as pickle
+
+    model_def = '../models/bvlc_googlenet/deploy.prototxt'
+    weight = 'bvlc_googlenet.caffemodel'  # must be downloaded first
+    net = caffe.Net(model_def, weight, caffe.TEST)
+
+    params = {}
+    for layer_name in net.params.keys():
+        weights = np.copy(net.params[layer_name][0].data)
+        bias = np.copy(net.params[layer_name][1].data)
+        params[layer_name + '_weight'] = weights
+        params[layer_name + '_bias'] = bias
+        print layer_name, weights.shape, bias.shape
+
+    with open('bvlc_googlenet.pickle', 'wb') as fd:
+        pickle.dump(params, fd)
+
+Then we construct the GoogleNet using SINGA's FeedForwardNet structure.
+Note that we added an EndPadding layer to resolve the discrepancy between the
+rounding strategies of the pooling layer in Caffe (ceil) and cuDNN (floor).
+Only the MaxPooling layers outside inception blocks have this problem.
+Refer to [this post](http://joelouismarino.github.io/blog_posts/blog_googlenet_keras.html) for more details.
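For readers consuming the extracted pickle from Python 3, here is a hypothetical round-trip sketch. The `encoding='latin1'` argument lets Python 3 read pickles written by Python 2's cPickle; the layer names and shapes below are illustrative, not the real GoogleNet keys.

```python
import os
import pickle
import tempfile

import numpy as np

def load_params(path):
    """Load a parameter dict written by the Python 2 extraction script."""
    with open(path, 'rb') as fd:
        # latin1 decodes Python 2 byte strings into str keys on Python 3
        return pickle.load(fd, encoding='latin1')

# Toy round-trip demonstrating the layer_name + '_weight'/'_bias' convention
params = {'conv1_weight': np.zeros((64, 3, 7, 7)), 'conv1_bias': np.zeros(64)}
path = os.path.join(tempfile.gettempdir(), 'toy_googlenet.pickle')
with open(path, 'wb') as fd:
    pickle.dump(params, fd)

loaded = load_params(path)
print(sorted(loaded.keys()))
```

The same loader pattern applies to the real `bvlc_googlenet.pickle`, whose keys follow the `<layer>_weight` / `<layer>_bias` naming set up by the extraction script.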
Added:
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/resnet/README.md.txt
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/resnet/README.md.txt?rev=1788191&view=auto
==============================================================================
---
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/resnet/README.md.txt
(added)
+++
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/imagenet/resnet/README.md.txt
Thu Mar 23 08:25:35 2017
@@ -0,0 +1,54 @@
+---
+name: Resnets on ImageNet
+SINGA version: 1.1
+SINGA commit: 45ec92d8ffc1fa1385a9307fdf07e21da939ee2f
+parameter_url:
https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
+license: Apache V2,
https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE
+---
+
+# Image Classification using Residual Networks
+
+
+In this example, we convert Residual Networks trained on [Torch](https://github.com/facebook/fb.resnet.torch) to SINGA for image classification.
+
+## Instructions
+
+* Download one parameter checkpoint file (see below) and the synset word file of ImageNet into this folder, e.g.,
+
+ $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
+ $ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/synset_words.txt
+ $ tar xvf resnet-18.tar.gz
+
+* Usage
+
+ $ python serve.py -h
+
+* Example
+
+ # use cpu
+ $ python serve.py --use_cpu --parameter_file resnet-18.pickle --model resnet --depth 18 &
+ # use gpu
+ $ python serve.py --parameter_file resnet-18.pickle --model resnet --depth 18 &
+
+ The parameter files for the following model and depth configuration pairs are provided:
+ * resnet (original resnet), [18](https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz)|[34](https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-34.tar.gz)|[101](https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-101.tar.gz)|[152](https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-152.tar.gz)
+ * addbn (resnet with a batch normalization layer after the addition), [50](https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-50.tar.gz)
+ * wrn (wide resnet), [50](https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/wrn-50-2.tar.gz)
+ * preact (resnet with pre-activation), [200](https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-200.tar.gz)
+
+* Submit images for classification
+
+ $ curl -i -F [email protected] http://localhost:9999/api
+ $ curl -i -F [email protected] http://localhost:9999/api
+ $ curl -i -F [email protected] http://localhost:9999/api
+
+image1.jpg, image2.jpg and image3.jpg should be downloaded before executing the above commands.
+
+## Details
+
+The parameter files were extracted from the original [Torch files](https://github.com/facebook/fb.resnet.torch/tree/master/pretrained) via
+the convert.py program.
+
+Usage:
+
+ $ python convert.py -h
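The model/depth pairs listed above can be summarized as a small lookup table. The helper below mirrors the archive names in the download links, but it is an illustration of the mapping, not part of serve.py:

```python
# (model, depth) -> parameter archive, following the links listed above
PARAM_FILES = {
    ('resnet', 18): 'resnet-18.tar.gz',
    ('resnet', 34): 'resnet-34.tar.gz',
    ('resnet', 101): 'resnet-101.tar.gz',
    ('resnet', 152): 'resnet-152.tar.gz',
    ('addbn', 50): 'resnet-50.tar.gz',
    ('wrn', 50): 'wrn-50-2.tar.gz',
    ('preact', 200): 'resnet-200.tar.gz',
}

def param_file(model, depth):
    """Return the archive name for a supported (model, depth) pair."""
    try:
        return PARAM_FILES[(model, depth)]
    except KeyError:
        raise ValueError('no parameter file for %s-%d' % (model, depth))

print(param_file('resnet', 18))
```

This makes explicit which `--model`/`--depth` combinations have published parameter files; any other pair has no checkpoint to download.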
Added:
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/index.rst.txt
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/index.rst.txt?rev=1788191&view=auto
==============================================================================
---
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/index.rst.txt
(added)
+++
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/index.rst.txt
Thu Mar 23 08:25:35 2017
@@ -0,0 +1,29 @@
+..
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements. See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership. The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License. You may obtain a copy of the License at
+..
+.. http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+..
+
+Model Zoo
+=========
+
+.. toctree::
+
+ cifar10/README
+ char-rnn/README
+ imagenet/alexnet/README
+ imagenet/googlenet/README
+
+
Added:
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/mnist/README.md.txt
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/mnist/README.md.txt?rev=1788191&view=auto
==============================================================================
---
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/mnist/README.md.txt
(added)
+++
incubator/singa/site/trunk/en/_sources/docs/model_zoo/examples/mnist/README.md.txt
Thu Mar 23 08:25:35 2017
@@ -0,0 +1,18 @@
+# Train an RBM model on the MNIST dataset
+
+This example trains an RBM model using the
+MNIST dataset. The RBM model and its hyper-parameters are set following
+[Hinton's paper](http://www.cs.toronto.edu/~hinton/science.pdf).
+
+## Running instructions
+
+1. Download the pre-processed [MNIST dataset](https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz)
+
+2. Start the training
+
+ python train.py mnist.pkl.gz
+
+By default the training code runs on CPU. To run it on a GPU card, please start
+the program with an additional argument
+
+ python train.py mnist.pkl.gz --use_gpu
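As background for the training rule behind this example, here is a minimal numpy sketch of one contrastive-divergence (CD-1) update in the spirit of Hinton's paper. It is illustrative only, not the example's actual train.py; shapes and names are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_w_gradient(v0, W, b_h, b_v, rng):
    """One CD-1 step; returns the positive-minus-negative phase gradient for W."""
    h0 = sigmoid(v0 @ W + b_h)                               # hidden probabilities
    h_sample = (rng.random(h0.shape) < h0).astype(np.float64)  # sampled hidden states
    v1 = sigmoid(h_sample @ W.T + b_v)                       # reconstruction
    h1 = sigmoid(v1 @ W + b_h)                               # hidden probs of reconstruction
    return v0.T @ h0 - v1.T @ h1

rng = np.random.default_rng(0)
v = rng.random((8, 784))                    # batch of 8 flattened MNIST-sized images
W = 0.01 * rng.standard_normal((784, 500))  # 784 visible units, 500 hidden units
grad = cd1_w_gradient(v, W, np.zeros(500), np.zeros(784), rng)
print(grad.shape)
```

In a full trainer this gradient (scaled by a learning rate, often with momentum) is added to W each mini-batch, with analogous updates for the bias vectors.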
Modified: incubator/singa/site/trunk/en/docs/installation.html
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/installation.html?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/installation.html (original)
+++ incubator/singa/site/trunk/en/docs/installation.html Thu Mar 23 08:25:35
2017
@@ -196,10 +196,14 @@ Currently, SINGA has conda packages (Pyt
<span id="linux"></span><h3>Linux<a class="headerlink" href="#linux"
title="Permalink to this headline">¶</a></h3>
<ol>
<li><p class="first">CPU only</p>
-<p>conda install -c nusdbsystem singa</p>
+<div class="highlight-default"><div class="highlight"><pre><span></span> <span
class="n">conda</span> <span class="n">install</span> <span
class="o">-</span><span class="n">c</span> <span class="n">nusdbsystem</span>
<span class="n">singa</span>
+</pre></div>
+</div>
</li>
<li><p class="first">GPU via CUDA+cuDNN</p>
-<p>conda install -c nusdbsystem singa-cudax.y-cudnnz</p>
+<div class="highlight-default"><div class="highlight"><pre><span></span> <span
class="n">conda</span> <span class="n">install</span> <span
class="o">-</span><span class="n">c</span> <span class="n">nusdbsystem</span>
<span class="n">singa</span><span class="o">-</span><span
class="n">cudax</span><span class="o">.</span><span class="n">y</span><span
class="o">-</span><span class="n">cudnnz</span>
+</pre></div>
+</div>
<p>where <code class="docutils literal"><span class="pre">x.y,z</span></code>
is one of <8.0, 5>, <7.5, 5> and <7.5, 4>.
Users need to install CUDA and cuDNN before installing SINGA.
If cuDNN is not in system folders (e.g., /usr/local), export the folder of
libcudnn.so to LD_LIBRARY_PATH</p>
@@ -231,28 +235,16 @@ If cuDNN is not in system folders (e.g.,
<th>Link</th>
</tr>
<tr>
- <td>Ubuntu14.04</td>
+ <td>Ubuntu16.04</td>
<td>CPU</td>
<td>-</td>
- <td><a
href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/debian/latest/ubuntu14.04-cpp/python-singa.deb">latest</a>,
<a
href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/debian">history</a></td>
+ <td><a
href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/debian/latest/ubuntu16.04-cpp/python-singa-1.1.0.deb">latest</a>,
<a
href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/debian">history</a></td>
</tr>
<tr>
- <td>Ubuntu14.04</td>
- <td>GPU</td>
- <td>CUDA7.5+cuDNN4</td>
- <td>-</td>
- </tr>
- <tr>
- <td>Ubuntu14.04</td>
- <td>GPU</td>
- <td>CUDA7.5+cuDNN5</td>
- <td>-</td>
- </tr>
- <tr>
- <td>Ubuntu14.04</td>
+ <td>Ubuntu16.04</td>
<td>GPU</td>
<td>CUDA8.0+cuDNN5</td>
- <td>-</td>
+ <td><a
href="http://comp.nus.edu.sg/~dbsystem/singa/assets/file/debian/latest/ubuntu16.04-cpp/python-singa-cuda-1.1.0.deb">latest</a>,
<a
href="http://www.comp.nus.edu.sg/~dbsystem/singa/assets/file/debian">history</a></td>
</tr>
</table><p>Download the deb file and install it via</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span
class="n">apt</span><span class="o">-</span><span class="n">get</span> <span
class="n">install</span> <span class="o"><</span><span class="n">path</span>
<span class="n">to</span> <span class="n">the</span> <span class="n">deb</span>
<span class="n">file</span><span class="p">,</span> <span
class="n">e</span><span class="o">.</span><span class="n">g</span><span
class="o">.</span><span class="p">,</span> <span class="o">./</span><span
class="n">python</span><span class="o">-</span><span
class="n">singa</span><span class="o">.</span><span class="n">deb</span><span
class="o">></span>
Modified:
incubator/singa/site/trunk/en/docs/model_zoo/examples/caffe/README.html
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/model_zoo/examples/caffe/README.html?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/model_zoo/examples/caffe/README.html
(original)
+++ incubator/singa/site/trunk/en/docs/model_zoo/examples/caffe/README.html Thu
Mar 23 08:25:35 2017
@@ -31,6 +31,9 @@
+ <link rel="index" title="Index"
+ href="../../../../genindex.html"/>
+ <link rel="search" title="Search" href="../../../../search.html"/>
<link rel="top" title="incubator-singa 1.1.0 documentation"
href="../../../../index.html"/>
<link href="../../../../_static/style.css" rel="stylesheet"
type="text/css">
@@ -169,12 +172,12 @@ binary file.</li>
<div class="section" id="predict">
<span id="predict"></span><h3>Predict<a class="headerlink" href="#predict"
title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal"><span
class="pre">predict.py</span></code> script creates the VGG model and read the
parameters,</p>
-<div class="highlight-python"><div class="highlight"><pre>usage: predict.py
[-h] model_txt model_bin imgclass
+<div class="highlight-default"><div class="highlight"><pre><span></span><span
class="n">usage</span><span class="p">:</span> <span
class="n">predict</span><span class="o">.</span><span class="n">py</span> <span
class="p">[</span><span class="o">-</span><span class="n">h</span><span
class="p">]</span> <span class="n">model_txt</span> <span
class="n">model_bin</span> <span class="n">imgclass</span>
</pre></div>
</div>
<p>where <code class="docutils literal"><span
class="pre">imgclass</span></code> refers to the synsets of imagenet dataset
for vgg models.
You can start the prediction program by executing the following command:</p>
-<div class="highlight-python"><div class="highlight"><pre>python predict.py
vgg16.prototxt vgg16.caffemodel synset_words.txt
+<div class="highlight-default"><div class="highlight"><pre><span></span><span
class="n">python</span> <span class="n">predict</span><span
class="o">.</span><span class="n">py</span> <span class="n">vgg16</span><span
class="o">.</span><span class="n">prototxt</span> <span
class="n">vgg16</span><span class="o">.</span><span class="n">caffemodel</span>
<span class="n">synset_words</span><span class="o">.</span><span
class="n">txt</span>
</pre></div>
</div>
<p>Then you type in the image path, and the program would output the top-5
labels.</p>
Modified:
incubator/singa/site/trunk/en/docs/model_zoo/examples/char-rnn/README.html
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/model_zoo/examples/char-rnn/README.html?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/model_zoo/examples/char-rnn/README.html
(original)
+++ incubator/singa/site/trunk/en/docs/model_zoo/examples/char-rnn/README.html
Thu Mar 23 08:25:35 2017
@@ -31,6 +31,9 @@
+ <link rel="index" title="Index"
+ href="../../../../genindex.html"/>
+ <link rel="search" title="Search" href="../../../../search.html"/>
<link rel="top" title="incubator-singa 1.1.0 documentation"
href="../../../../index.html"/>
<link href="../../../../_static/style.css" rel="stylesheet"
type="text/css">
@@ -163,16 +166,16 @@ generate meaningful code from the model.
Other plain text files can also be used.</p>
</li>
<li><p class="first">Start the training,</p>
-<div class="highlight-python"><div class="highlight"><pre> python train.py
linux_input.txt
+<div class="highlight-default"><div class="highlight"><pre><span></span>
<span class="n">python</span> <span class="n">train</span><span
class="o">.</span><span class="n">py</span> <span
class="n">linux_input</span><span class="o">.</span><span class="n">txt</span>
</pre></div>
</div>
<p>Some hyper-parameters could be set through command line,</p>
-<div class="highlight-python"><div class="highlight"><pre> python train.py -h
+<div class="highlight-default"><div class="highlight"><pre><span></span>
<span class="n">python</span> <span class="n">train</span><span
class="o">.</span><span class="n">py</span> <span class="o">-</span><span
class="n">h</span>
</pre></div>
</div>
</li>
<li><p class="first">Sample characters from the model by providing the number
of characters to sample and the seed string.</p>
-<div class="highlight-python"><div class="highlight"><pre> python sample.py
'model.bin' 100 --seed '#include <std'
+<div class="highlight-default"><div class="highlight"><pre><span></span>
<span class="n">python</span> <span class="n">sample</span><span
class="o">.</span><span class="n">py</span> <span
class="s1">'model.bin'</span> <span class="mi">100</span> <span
class="o">--</span><span class="n">seed</span> <span class="s1">'#include
<std'</span>
</pre></div>
</div>
<p>Please replace ‘model.bin’ with the path to one of the
checkpoint paths.</p>
Modified:
incubator/singa/site/trunk/en/docs/model_zoo/examples/cifar10/README.html
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/model_zoo/examples/cifar10/README.html?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/model_zoo/examples/cifar10/README.html
(original)
+++ incubator/singa/site/trunk/en/docs/model_zoo/examples/cifar10/README.html
Thu Mar 23 08:25:35 2017
@@ -31,6 +31,9 @@
+ <link rel="index" title="Index"
+ href="../../../../genindex.html"/>
+ <link rel="search" title="Search" href="../../../../search.html"/>
<link rel="top" title="incubator-singa 1.1.0 documentation"
href="../../../../index.html"/>
<link href="../../../../_static/style.css" rel="stylesheet"
type="text/css">
@@ -167,11 +170,11 @@ are required. Please refer to the instal
<div class="section" id="data-preparation">
<span id="data-preparation"></span><h3>Data preparation<a class="headerlink"
href="#data-preparation" title="Permalink to this headline">¶</a></h3>
<p>The binary Cifar-10 dataset can be downloaded with</p>
-<div class="highlight-python"><div class="highlight"><pre>python
download_data.py bin
+<div class="highlight-default"><div class="highlight"><pre><span></span><span
class="n">python</span> <span class="n">download_data</span><span
class="o">.</span><span class="n">py</span> <span class="nb">bin</span>
</pre></div>
</div>
<p>The Python version can be downloaded with</p>
-<div class="highlight-python"><div class="highlight"><pre>python
download_data.py py
+<div class="highlight-default"><div class="highlight"><pre><span></span><span
class="n">python</span> <span class="n">download_data</span><span
class="o">.</span><span class="n">py</span> <span class="n">py</span>
</pre></div>
</div>
</div>
@@ -181,26 +184,26 @@ are required. Please refer to the instal
<ol>
<li><p class="first">train.py. The following command trains the VGG model
using the Python version of the Cifar-10 dataset in the
&#8216;cifar-10-batches-py&#8217; folder.</p>
-<div class="highlight-python"><div class="highlight"><pre> python train.py vgg
cifar-10-batches-py
+<div class="highlight-default"><div class="highlight"><pre><span></span> <span
class="n">python</span> <span class="n">train</span><span
class="o">.</span><span class="n">py</span> <span class="n">vgg</span> <span
class="n">cifar</span><span class="o">-</span><span class="mi">10</span><span
class="o">-</span><span class="n">batches</span><span class="o">-</span><span
class="n">py</span>
</pre></div>
</div>
<p>To train other models, please replace &#8216;vgg&#8217; with
&#8216;alexnet&#8217;, &#8216;resnet&#8217; or &#8216;caffe&#8217;,
where &#8216;caffe&#8217; refers to the AlexNet model converted from Caffe. By
default the training runs on a CudaGPU device; to run it on CppCPU, add an
additional argument</p>
-<div class="highlight-python"><div class="highlight"><pre> python train.py vgg
cifar-10-batches-py --use_cpu
+<div class="highlight-default"><div class="highlight"><pre><span></span> <span
class="n">python</span> <span class="n">train</span><span
class="o">.</span><span class="n">py</span> <span class="n">vgg</span> <span
class="n">cifar</span><span class="o">-</span><span class="mi">10</span><span
class="o">-</span><span class="n">batches</span><span class="o">-</span><span
class="n">py</span> <span class="o">--</span><span class="n">use_cpu</span>
</pre></div>
</div>
</li>
<li><p class="first">alexnet.cc. It trains the AlexNet model using the CPP
APIs on a CudaGPU,</p>
-<div class="highlight-python"><div class="highlight"><pre> ./run.sh
+<div class="highlight-default"><div class="highlight"><pre><span></span> <span
class="o">./</span><span class="n">run</span><span class="o">.</span><span
class="n">sh</span>
</pre></div>
</div>
</li>
<li><p class="first">alexnet-parallel.cc. It trains the AlexNet model using
the CPP APIs on two CudaGPU devices.
The two devices run synchronously to compute the gradients of the model
parameters, which are
averaged on the host CPU device and then applied to update the
parameters.</p>
-<div class="highlight-python"><div class="highlight"><pre> ./run-parallel.sh
+<div class="highlight-default"><div class="highlight"><pre><span></span> <span
class="o">./</span><span class="n">run</span><span class="o">-</span><span
class="n">parallel</span><span class="o">.</span><span class="n">sh</span>
</pre></div>
</div>
</li>
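The synchronous averaging step described above can be sketched in plain Python (a toy illustration with lists standing in for the two devices' gradient tensors; names are hypothetical, not SINGA's API):

```python
def average_gradients(grads_dev0, grads_dev1):
    """Average per-parameter gradients computed by two devices on the host.

    Each argument maps parameter name -> list of gradient values.
    Returns the element-wise average, which would then be used to
    update the shared model parameters.
    """
    avg = {}
    for name in grads_dev0:
        g0, g1 = grads_dev0[name], grads_dev1[name]
        avg[name] = [(a + b) / 2.0 for a, b in zip(g0, g1)]
    return avg

g0 = {"conv1_weight": [0.25, -0.5]}
g1 = {"conv1_weight": [0.75, 0.0]}
print(average_gradients(g0, g1))  # {'conv1_weight': [0.5, -0.25]}
```

In the real program the per-device gradients live on GPU memory and are copied to the host before averaging; this sketch only shows the arithmetic.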
@@ -211,7 +214,7 @@ averaged on the host CPU device and then
<div class="section" id="prediction">
<span id="prediction"></span><h3>Prediction<a class="headerlink"
href="#prediction" title="Permalink to this headline">¶</a></h3>
<p>predict.py includes the prediction function</p>
-<div class="highlight-python"><div class="highlight"><pre> def predict(net,
images, dev, topk=5)
+<div class="highlight-default"><div class="highlight"><pre><span></span>
<span class="k">def</span> <span class="nf">predict</span><span
class="p">(</span><span class="n">net</span><span class="p">,</span> <span
class="n">images</span><span class="p">,</span> <span class="n">dev</span><span
class="p">,</span> <span class="n">topk</span><span class="o">=</span><span
class="mi">5</span><span class="p">)</span>
</pre></div>
</div>
<p>The net is created by loading the previously trained model; Images consist
of
@@ -219,7 +222,7 @@ a numpy array of images (one row per ima
a CudaGPU device or the host CppCPU device; it returns the topk labels for
each instance.</p>
<p>The predict.py file&#8217;s main function provides an example of using the
pre-trained AlexNet model to do prediction for new images.
The &#8216;model.bin&#8217; file generated by the training program should be
placed in the cifar10 folder to run</p>
-<div class="highlight-python"><div class="highlight"><pre> python predict.py
+<div class="highlight-default"><div class="highlight"><pre><span></span>
<span class="n">python</span> <span class="n">predict</span><span
class="o">.</span><span class="n">py</span>
</pre></div>
</div>
</div>
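The topk selection performed by the prediction function can be illustrated with a small stand-alone sketch (plain lists stand in for the network's output batch; this function is an illustration, not SINGA's predict):

```python
def topk_labels(net_scores, topk=5):
    """Return the indices of the topk highest-scoring labels per image.

    net_scores: list of score lists, one per image, standing in for
    the scores the network produces for a batch.
    """
    results = []
    for scores in net_scores:
        # rank label indices by descending score, keep the first topk
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        results.append(ranked[:topk])
    return results

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
print(topk_labels(scores, topk=2))  # [[1, 2], [0, 1]]
```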
Modified:
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/alexnet/README.html
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/alexnet/README.html?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
---
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/alexnet/README.html
(original)
+++
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/alexnet/README.html
Thu Mar 23 08:25:35 2017
@@ -31,6 +31,9 @@
+ <link rel="index" title="Index"
+ href="../../../../../genindex.html"/>
+ <link rel="search" title="Search" href="../../../../../search.html"/>
<link rel="top" title="incubator-singa 1.1.0 documentation"
href="../../../../../index.html"/>
<link href="../../../../../_static/style.css" rel="stylesheet"
type="text/css">
@@ -173,7 +176,7 @@ or from <a class="reference external" hr
<ul>
<li><p class="first">Assuming you have downloaded the data and the list,
we now transform the data into binary files. You can run:</p>
-<div class="highlight-python"><div class="highlight"><pre> sh create_data.sh
+<div class="highlight-default"><div class="highlight"><pre><span></span>
<span class="n">sh</span> <span class="n">create_data</span><span
class="o">.</span><span class="n">sh</span>
</pre></div>
</div>
<p>The script will generate a test file (<code class="docutils literal"><span
class="pre">test.bin</span></code>), a mean file (<code class="docutils
literal"><span class="pre">mean.bin</span></code>) and
@@ -196,7 +199,7 @@ The script will generate these files in
<span id="training"></span><h3>Training<a class="headerlink" href="#training"
title="Permalink to this headline">¶</a></h3>
<ul>
<li><p class="first">After preparing data, you can run the following command
to train the Alexnet model.</p>
-<div class="highlight-python"><div class="highlight"><pre> sh run.sh
+<div class="highlight-default"><div class="highlight"><pre><span></span>
<span class="n">sh</span> <span class="n">run</span><span
class="o">.</span><span class="n">sh</span>
</pre></div>
</div>
</li>
Modified:
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/googlenet/README.html
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/googlenet/README.html?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
---
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/googlenet/README.html
(original)
+++
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/googlenet/README.html
Thu Mar 23 08:25:35 2017
@@ -31,6 +31,9 @@
+ <link rel="index" title="Index"
+ href="../../../../../genindex.html"/>
+ <link rel="search" title="Search" href="../../../../../search.html"/>
<link rel="top" title="incubator-singa 1.1.0 documentation"
href="../../../../../index.html"/>
<link href="../../../../../_static/style.css" rel="stylesheet"
type="text/css">
@@ -159,13 +162,13 @@ license: unrestricted https://github.com
<span id="instructions"></span><h2>Instructions<a class="headerlink"
href="#instructions" title="Permalink to this headline">¶</a></h2>
<ul>
<li><p class="first">Download the parameter checkpoint file into this
folder</p>
-<div class="highlight-python"><div class="highlight"><pre> $ wget
https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
+<div class="highlight-default"><div class="highlight"><pre><span></span> $
wget https://s3-ap-southeast-1.amazonaws.com/dlfile/bvlc_googlenet.tar.gz
$ tar xvf bvlc_googlenet.tar.gz
</pre></div>
</div>
</li>
<li><p class="first">Run the program</p>
-<div class="highlight-python"><div class="highlight"><pre> # use cpu
+<div class="highlight-default"><div class="highlight"><pre><span></span> #
use cpu
$ python serve.py -C &
# use gpu
$ python serve.py &
@@ -173,7 +176,7 @@ license: unrestricted https://github.com
</div>
</li>
<li><p class="first">Submit images for classification</p>
-<div class="highlight-python"><div class="highlight"><pre> $ curl -i -F
[email protected] http://localhost:9999/api
+<div class="highlight-default"><div class="highlight"><pre><span></span> $
curl -i -F [email protected] http://localhost:9999/api
$ curl -i -F [email protected] http://localhost:9999/api
$ curl -i -F [email protected] http://localhost:9999/api
</pre></div>
@@ -186,10 +189,10 @@ license: unrestricted https://github.com
<span id="details"></span><h2>Details<a class="headerlink" href="#details"
title="Permalink to this headline">¶</a></h2>
<p>We first extract the parameter values from <a class="reference external"
href="http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel">Caffe’s
checkpoint file</a> into a pickle version.
After downloading the checkpoint file into the <code class="docutils
literal"><span class="pre">caffe_root/python</span></code> folder, run the
following script</p>
-<div class="highlight-python"><div class="highlight"><pre><span class="c1">#
to be executed within caffe_root/python folder</span>
+<div class="highlight-default"><div class="highlight"><pre><span></span><span
class="c1"># to be executed within caffe_root/python folder</span>
<span class="kn">import</span> <span class="nn">caffe</span>
-<span class="kn">import</span> <span class="nn">numpy</span> <span
class="kn">as</span> <span class="nn">np</span>
-<span class="kn">import</span> <span class="nn">cPickle</span> <span
class="kn">as</span> <span class="nn">pickle</span>
+<span class="kn">import</span> <span class="nn">numpy</span> <span
class="k">as</span> <span class="nn">np</span>
+<span class="kn">import</span> <span class="nn">cPickle</span> <span
class="k">as</span> <span class="nn">pickle</span>
<span class="n">model_def</span> <span class="o">=</span> <span
class="s1">'../models/bvlc_googlenet/deploy.prototxt'</span>
<span class="n">weight</span> <span class="o">=</span> <span
class="s1">'bvlc_googlenet.caffemodel'</span> <span class="c1"># must
be downloaded at first</span>
@@ -201,7 +204,7 @@ After downloading the checkpoint file in
<span class="n">bias</span><span class="o">=</span><span
class="n">np</span><span class="o">.</span><span class="n">copy</span><span
class="p">(</span><span class="n">net</span><span class="o">.</span><span
class="n">params</span><span class="p">[</span><span
class="n">layer_name</span><span class="p">][</span><span
class="mi">1</span><span class="p">]</span><span class="o">.</span><span
class="n">data</span><span class="p">)</span>
<span class="n">params</span><span class="p">[</span><span
class="n">layer_name</span><span class="o">+</span><span
class="s1">'_weight'</span><span class="p">]</span><span
class="o">=</span><span class="n">weights</span>
<span class="n">params</span><span class="p">[</span><span
class="n">layer_name</span><span class="o">+</span><span
class="s1">'_bias'</span><span class="p">]</span><span
class="o">=</span><span class="n">bias</span>
- <span class="k">print</span> <span class="n">layer_name</span><span
class="p">,</span> <span class="n">weights</span><span class="o">.</span><span
class="n">shape</span><span class="p">,</span> <span class="n">bias</span><span
class="o">.</span><span class="n">shape</span>
+ <span class="nb">print</span> <span class="n">layer_name</span><span
class="p">,</span> <span class="n">weights</span><span class="o">.</span><span
class="n">shape</span><span class="p">,</span> <span class="n">bias</span><span
class="o">.</span><span class="n">shape</span>
<span class="k">with</span> <span class="nb">open</span><span
class="p">(</span><span class="s1">'bvlc_googlenet.pickle'</span><span
class="p">,</span> <span class="s1">'wb'</span><span class="p">)</span>
<span class="k">as</span> <span class="n">fd</span><span class="p">:</span>
<span class="n">pickle</span><span class="o">.</span><span
class="n">dump</span><span class="p">(</span><span class="n">params</span><span
class="p">,</span> <span class="n">fd</span><span class="p">)</span>
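The pickle file produced by the script above is read back at serving time. A minimal round-trip sketch (toy values standing in for the extracted numpy arrays; an in-memory buffer replaces the file for brevity):

```python
import io
import pickle

# a params dict like the one built above, with toy values
params = {"conv1_weight": [0.1, 0.2], "conv1_bias": [0.0]}

buf = io.BytesIO()
pickle.dump(params, buf)   # write, as the extraction script does
buf.seek(0)
restored = pickle.load(buf)  # read back, as the serving side would
print(restored == params)  # True
```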
Modified:
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/resnet/README.html
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/resnet/README.html?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
---
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/resnet/README.html
(original)
+++
incubator/singa/site/trunk/en/docs/model_zoo/examples/imagenet/resnet/README.html
Thu Mar 23 08:25:35 2017
@@ -31,6 +31,9 @@
+ <link rel="index" title="Index"
+ href="../../../../../genindex.html"/>
+ <link rel="search" title="Search" href="../../../../../search.html"/>
<link rel="top" title="incubator-singa 1.1.0 documentation"
href="../../../../../index.html"/>
<link href="../../../../../_static/style.css" rel="stylesheet"
type="text/css">
@@ -158,19 +161,19 @@ license: Apache V2, https://github.com/f
<span id="instructions"></span><h2>Instructions<a class="headerlink"
href="#instructions" title="Permalink to this headline">¶</a></h2>
<ul>
<li><p class="first">Download one parameter checkpoint file (see below) and
the synset word file of ImageNet into this folder, e.g.,</p>
-<div class="highlight-python"><div class="highlight"><pre> $ wget
https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
+<div class="highlight-default"><div class="highlight"><pre><span></span> $
wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/resnet-18.tar.gz
$ wget https://s3-ap-southeast-1.amazonaws.com/dlfile/resnet/synset_words.txt
$ tar xvf resnet-18.tar.gz
</pre></div>
</div>
</li>
<li><p class="first">Usage</p>
-<div class="highlight-python"><div class="highlight"><pre> $ python serve.py
-h
+<div class="highlight-default"><div class="highlight"><pre><span></span> $
python serve.py -h
</pre></div>
</div>
</li>
<li><p class="first">Example</p>
-<div class="highlight-python"><div class="highlight"><pre> # use cpu
+<div class="highlight-default"><div class="highlight"><pre><span></span> #
use cpu
$ python serve.py --use_cpu --parameter_file resnet-18.pickle --model resnet
--depth 18 &
# use gpu
$ python serve.py --parameter_file resnet-18.pickle --model resnet --depth
18 &
@@ -185,7 +188,7 @@ license: Apache V2, https://github.com/f
</ul>
</li>
<li><p class="first">Submit images for classification</p>
-<div class="highlight-python"><div class="highlight"><pre> $ curl -i -F
[email protected] http://localhost:9999/api
+<div class="highlight-default"><div class="highlight"><pre><span></span> $
curl -i -F [email protected] http://localhost:9999/api
$ curl -i -F [email protected] http://localhost:9999/api
$ curl -i -F [email protected] http://localhost:9999/api
</pre></div>
@@ -199,7 +202,7 @@ license: Apache V2, https://github.com/f
<p>The parameter files were extracted from the original <a class="reference
external"
href="https://github.com/facebook/fb.resnet.torch/tree/master/pretrained">torch
files</a> via
the convert.py program.</p>
<p>Usage:</p>
-<div class="highlight-python"><div class="highlight"><pre>$ python convert.py
-h
+<div class="highlight-default"><div class="highlight"><pre><span></span>$
python convert.py -h
</pre></div>
</div>
</div>
Modified: incubator/singa/site/trunk/en/docs/model_zoo/examples/index.html
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/model_zoo/examples/index.html?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/model_zoo/examples/index.html (original)
+++ incubator/singa/site/trunk/en/docs/model_zoo/examples/index.html Thu Mar 23
08:25:35 2017
@@ -31,6 +31,9 @@
+ <link rel="index" title="Index"
+ href="../../../genindex.html"/>
+ <link rel="search" title="Search" href="../../../search.html"/>
<link rel="top" title="incubator-singa 1.1.0 documentation"
href="../../../index.html"/>
<link href="../../../_static/style.css" rel="stylesheet" type="text/css">
Modified:
incubator/singa/site/trunk/en/docs/model_zoo/examples/mnist/README.html
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/docs/model_zoo/examples/mnist/README.html?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
--- incubator/singa/site/trunk/en/docs/model_zoo/examples/mnist/README.html
(original)
+++ incubator/singa/site/trunk/en/docs/model_zoo/examples/mnist/README.html Thu
Mar 23 08:25:35 2017
@@ -31,6 +31,9 @@
+ <link rel="index" title="Index"
+ href="../../../../genindex.html"/>
+ <link rel="search" title="Search" href="../../../../search.html"/>
<link rel="top" title="incubator-singa 1.1.0 documentation"
href="../../../../index.html"/>
<link href="../../../../_static/style.css" rel="stylesheet"
type="text/css">
@@ -155,14 +158,14 @@ MNIST dataset. The RBM model and its hyp
<li><p class="first">Download the pre-processed <a class="reference external"
href="https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz">MNIST
dataset</a></p>
</li>
<li><p class="first">Start the training</p>
-<div class="highlight-python"><div class="highlight"><pre> python train.py
mnist.pkl.gz
+<div class="highlight-default"><div class="highlight"><pre><span></span> <span
class="n">python</span> <span class="n">train</span><span
class="o">.</span><span class="n">py</span> <span class="n">mnist</span><span
class="o">.</span><span class="n">pkl</span><span class="o">.</span><span
class="n">gz</span>
</pre></div>
</div>
</li>
</ol>
<p>By default the training code runs on the CPU. To run it on a GPU card,
please start
the program with an additional argument</p>
-<div class="highlight-python"><div class="highlight"><pre> python train.py
mnist.pkl.gz --use_gpu
+<div class="highlight-default"><div class="highlight"><pre><span></span>
<span class="n">python</span> <span class="n">train</span><span
class="o">.</span><span class="n">py</span> <span class="n">mnist</span><span
class="o">.</span><span class="n">pkl</span><span class="o">.</span><span
class="n">gz</span> <span class="o">--</span><span class="n">use_gpu</span>
</pre></div>
</div>
</div>
Modified: incubator/singa/site/trunk/en/objects.inv
URL:
http://svn.apache.org/viewvc/incubator/singa/site/trunk/en/objects.inv?rev=1788191&r1=1788190&r2=1788191&view=diff
==============================================================================
Binary files - no diff available.